frontpage.
YORO Increases VR Frame Rates by Rendering One Eye and Synthesizing the Other

https://www.uploadvr.com/you-only-render-once-vr-frame-rate-improving-technique/
1•LorenDB•11s ago•0 comments

Show HN: Stop copy-pasting Google Maps data – scrape it in one click

https://listcrawling.online
1•combineimages•28s ago•0 comments

Electric bikes might just be the healthiest thing to ever happen to teenagers

https://electrek.co/2025/08/05/electric-bikes-might-just-be-the-healthiest-thing-to-ever-happen-to-teenagers/
3•harambae•3m ago•0 comments

It Sure Sounds Like Assassin's Creed Shadows Is Coming to Switch 2 This Year

https://kotaku.com/assassins-creed-shadows-switch-2-port-release-date-1851786749
1•PaulHoule•4m ago•0 comments

Google Labs Experiments

https://labs.google/experiments/
1•simonpure•4m ago•0 comments

Show HN: An Open-Source E-Book Reader for Conversational Reading with an LLM

https://github.com/shutootaki/bookwith
2•takigon•4m ago•0 comments

Show HN: AWS CodePipeline events via MQTT – with Tailscale support

https://www.npmjs.com/package/codepipeline-mqtt-notifier-cdk-construct
1•nkorai•5m ago•0 comments

In Defense of Describable Dating Preferences

https://www.astralcodexten.com/p/in-defense-of-describable-dating
2•andsoitis•7m ago•0 comments

Automated Browser Testing with Claude Code Agents and Browserbase

https://ritza.co/articles/automated-browser-testing-for-ai-development-with-browserbase-and-claude-code-agents/
1•sixhobbits•7m ago•0 comments

Palantir Won over Washington–and Pushed Its Stock Up 600%

https://www.wsj.com/tech/palantir-pltr-stock-success-government-contracts-f3b2d453
2•jgalt212•8m ago•0 comments

The Disturbing Implications of Jim Acosta's AI Interview

https://weaponizedspaces.substack.com/p/the-disturbing-implications-of-jim
1•rbanffy•9m ago•0 comments

As AI Changes Internet Search, Reddit Lies in a Sweet Spot

https://www.wsj.com/tech/ai/reddit-rddt-stock-ai-search-2dcc69a4
2•impish9208•10m ago•3 comments

Notion is putting ads in your Slack now

https://imgur.com/a/xHZcT20
2•bcardarella•11m ago•0 comments

Optimal Allocation

https://varietyiq.com/blog/allocation
1•efavdb•11m ago•0 comments

A review of the Julia language (2014/2022)

https://danluu.com/julialang/
1•Qem•12m ago•0 comments

303Gen – 303 acid loops generator

https://303-gen-06a668.netlify.app/
1•ankitg12•16m ago•0 comments

Show HN: Lebenslauf – A CV builder with Markdown, templates, and local storage

https://cvmd.vercel.app/
1•bimals•18m ago•0 comments

Forward Deployed Engineering Principles

https://builders.ramp.com/post/forward-deployed-engineering
1•memset•20m ago•0 comments

Vanishing Culture: Why Preserve Flash?

https://blog.archive.org/2025/08/06/vanishing-culture-why-preserve-flash/
1•TangerineDream•21m ago•0 comments

Information about the 1991 Münich Software Festival?

https://catless.ncl.ac.uk/Risks/11.19.html#subj1
1•dement•22m ago•1 comments

Deploy a Python Flask App to Render with Docker

https://blog.appsignal.com/2025/08/06/deploy-a-python-flask-app-to-render-with-docker.html
1•unripe_syntax•24m ago•0 comments

Accidentally turned a ChatGPT prompt into a startup.

https://magicnode.ai/
1•zuhaib-rasheed•27m ago•2 comments

Sync Secrets from K8s to Vault

https://github.com/danieldonoghue/vault-sync-operator
1•O5ten•27m ago•0 comments

I Spent $500 to Test Devin for Prompt Injection So That You Don't Have To

https://embracethered.com/blog/posts/2025/devin-i-spent-usd500-to-hack-devin/
3•kerng•28m ago•0 comments

Q&A: Algorithmic Tyranny – By Caroline Orr Bueno, PhD

https://weaponizedspaces.substack.com/p/q-and-a-algorithmic-tyranny
1•rbanffy•29m ago•0 comments

Grok generates fake Taylor Swift nudes without being asked

https://arstechnica.com/tech-policy/2025/08/grok-generates-fake-taylor-swift-nudes-without-being-asked/
12•juujian•29m ago•0 comments

Show HN: Magic Sandbox – An AI app platform

https://magicsandbox.ai
1•k_kelleher•29m ago•0 comments

UK's Ministry of Defence pins hopes on AI to stop the next email blunder

https://www.theregister.com/2025/08/06/mod_taps_aussie_ai_shop/
1•rntn•32m ago•0 comments

Show HN: SpeedVitals RUM – Real-User Monitoring and Analytics Built for Privacy

https://speedvitals.com/real-user-monitoring/
2•kashishkumawat•32m ago•0 comments

At the Tesla Diner, the Future Looks Mid

https://www.nytimes.com/2025/08/05/dining/tesla-diner-elon-musk.html
1•mitchbob•33m ago•1 comments

LLM Inflation

https://tratt.net/laurie/blog/2025/llm_inflation.html
47•ingve•2h ago

Comments

jasode•1h ago
>Creating the necessary prose is torturous for most of us, so Bob fires up the LLM du jour, types in “Please create a 4 paragraph long business case for my manager, explaining why I need to replace my old, slow computer” and copies the result into his email.

>Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM du jour and types at the start “Please summarise this email for me in one sentence”. The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.”

Sam Altman actually had a concise tweet about this blog's topic (https://x.com/sama/status/1631394688384270336)

>something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points 2:42 PM · Mar 2, 2023 · 1.2M Views

unglaublich•1h ago
It's not really strange, is it? Business and politeness etiquette requires a certain phrasing that is typically more tedious than the core information of a message.

Now that decorating any message with such fluff is automated, we may as well drop the requirement and just state clearly what we want, without the fluff.

watwut•53m ago
I do not know where these people live, but managers have not read long emails for years already. Not that I blame them, but this world where they would actually want those 4-paragraph essays has not existed for years.
unglaublich•1h ago
An LLM is effectively a compressed model of its input data.

Inference is then the decompression stage where it generates text from the input prompt and the compressed model.

Now that compressing and decompressing texts is trivial with LLMs, we humans should focus - in business at least - on communicating only the core of what we want to say.

If the argument to get a new keyboard is: "i like it", then this should suffice, for inflated versions of this argument can be trivially generated.
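The compression framing above can be made concrete with a conventional lossless codec; a minimal sketch in Python (the message text is invented for illustration), showing the byte-for-byte round-trip guarantee that zlib gives and that an LLM expand-then-summarise cycle does not:

```python
import zlib

# Toy illustration: business fluff is repetitive and predictable,
# so a lossless codec squeezes it well.
core = b"i like it"
inflated = (b"Dear manager, I hope this message finds you well. " * 4
            + b"I would like to request a new keyboard because "
            + core + b". Kind regards, Bob.")

packed = zlib.compress(inflated, 9)

# Lossless: decompression reproduces the input byte for byte --
# exactly the guarantee an LLM summary does not give.
assert zlib.decompress(packed) == inflated

ratio = len(packed) / len(inflated)
print(f"{len(inflated)} bytes -> {len(packed)} bytes (ratio {ratio:.2f})")
```

The point of the sketch is the asymmetry: the codec's decompressed output is provably identical to the input, whereas inflating bullet points into prose and summarising them back is lossy in both directions.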

tomrod•55m ago
The inverse of this is "AI Loopidity" where we burn cycles inflating then deflating information (in emails, say, or in AI code that blows up then gets reduced or summarized). This often also leads to weird comms outcomes, like saving a jpg at 85% a dozen times.
AIPedant•46m ago
What I hate about this is that often a novel and interesting idea truly needs extra space to define and illustrate itself, and by virtue of its novelty LLMs will have substantially more difficulty summarizing it correctly. But it sounds like we are heading to a medium-term where people cynically assume any long email must be LLM-generated fluff, and hence nothing is lost by asking for an LLM summary.

What a horrible technology.

onlyrealcuzzo•38m ago
> If the argument to get a new keyboard is: "i like it", then this should suffice

This seems like exactly what LLMs are supposed to be good at, according to you, so why don't they just near-losslessly compress the data first, and then train on that?

Also, if they're so good at this, then why are their answers often long-winded and require so much skimming to get what I want?

I'm skeptical LLMs are accurately described as "near lossless de/compression engines".

If you change the temperature settings, they can get quite creative.

They are their algorithm, run on their inputs, which can be roughly described as a form of compression, but it's unlike the main forms of compression we think of - and it at least appears to have emergent decompression properties we aren't used to.

If you up the lossy-ness on a JPEG, you don't really end up with creative outputs. Maybe you do by coincidence, and maybe you only do with LLMs - but at much higher rates.

Whatever is happening does not seem to be what I think people typically associate with simple de/compression.

Theoretically, you can train an LLM on all of Physics, except a few things, and it could discover the missing pieces through reasoning.

Yeah, maybe a JPEG could, too, but the odds of that seem astronomically lower.

1980phipsi•34m ago
Be more trivial
stocksinsmocks•1h ago
> One of the signal achievements of computing is data compression

Ah, yes. It is an achievement in signals in a way.

watwut•50m ago
> Bob needs a new computer for his job. In order to obtain a new work computer he has to create a 4 paragraph business case explaining why the new computer will improve his productivity.

Is this situation in any way a realistic one? Because the way companies work in my neck of the woods, no one wants your 4-paragraph business-case essay about a computer. Like, it is a funny anecdote.

But, in the real world, at least in my experience, pretty much everyone preferred short emails and messages. They would skim the long ones at best, especially in a situation that can be boiled down to "Tom wants a new computer and is verbose about it".

dspillett•38m ago
You give the concise version to the person who is going to authorise your request. The four-paragraph version goes on record for the people that person needs to justify the decision to; they'll likely declare “I don't see a problem here” without actually reading it, which is the intention: they might be more inclined to question the shorter version.
roxolotl•44m ago
My PM said they’d written a bunch of tickets for a project yesterday morning that we hadn’t fully scoped yet. I was pleasantly surprised because I can’t complain if they are going to get ahead of things and start scaffolding tickets.

Of course, when I went to read them, they were 100% slop. The funniest requirement was progress bars for actions that don't have progress. The tickets were, even if you assume the requirements weren't slop, at least 15 points apiece.

But OK, maybe with all of these new tools we can respond by implementing these insane requirements. The real problem is what this article is discussing. Each ticket was also 500-700 words. Requirements that boil down to a single if statement were described in prose. While this is hilarious, the problem is it makes them harder to understand.

I tried to explain this and they just said “ok fine rewrite them then”. Which I did in maybe 15min because there wasn’t actually much to write.

At this point I’m at a loss for how to even work with people that are so convinced these things will save time because they look at the volume of the output.

kasey_junk•32m ago
Ask an llm for a project plan and they’ll happily throw dates around for each step, when they can’t possibly know how long it will take.

But project plan dates have always been fiction. Getting there faster is an efficiency win.

That said, I’ve found that llms are good as interrogators. If I use one to guide a conversation and research background information, and then explicitly tell it to tersely outline the steps in something, I’ve had very good results.

danielbln•13m ago
The date/week estimations in plans are especially funny when you work with an agent and it spits that out. "Week 1, setting up the structure" - uh, no, we're going to do this in 10 minutes.
dimitri-vs•26m ago
The only acceptable response to obvious AI slop, unless it's clear it's been heavily reviewed and updated, is to put it back into the AI, ask it for a 1-paragraph summary, and work off of that.
waynenilsen•10m ago
The software requirements phase is becoming increasingly critical to the development lifecycle, and that trend will continue. I have started writing very short tickets and having Claude Code inflate them, then polishing those. I often include negative prompts at this point: Claude may have included "add a progress bar for xyz" and I simply add "do not" in front of those things that do not make sense. The results have been excellent.
dspillett•42m ago
This type of verbiage inflation was happening in business all the time anyway. LLMs are just being used as a method for doing it faster.
tovej•38m ago
Was it? I don't remember ever running into anyone preferring long documents. Also, anything added by the LLM is pure noise, with the possibility of a hallucination or two. If you write it yourself, you might even add some relevant information at times, and you're not going to start making things up.

And if an LLM is also used at the other endpoint to parse the longer text, that creates a broken telephone. Congrats, your communication channel is now unreliable.

santiagobasulto•34m ago
> the load on my server is reduced

isn't this the opposite? Enabling compression will INCREASE the load on your server as you need more CPU to compress/decompress the data.

reactordev•20m ago
It depends on where the bottleneck is. If it’s in network packet size, it would help serve more clients. At the expense of more CPU needed to decode/encode the data. If you’re serving large files and you have headroom in hardware it’s totally normal.
have_faith•20m ago
Depends on the efficiency of the compression, I guess. If X bytes of data take N time to transmit, and each slice of N takes Y CPU cycles during transmission, how many Y may your compression algorithm use, and how much must it lower N, in order to be more efficient from a CPU-utilisation perspective? Presumably there's an inflection point, maybe one that's impractical to achieve? I'm just vibe-thinking-out-loud.
Koffiepoeder•9m ago
Or your server can cache the compressed content (since it is a static page anyway).
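The caching point is easy to make concrete: for static content you pay the compression CPU once, not per request. A minimal sketch with Python's stdlib gzip (the page contents and the serve() handler are invented for illustration):

```python
import gzip

# Hypothetical static page (contents invented for illustration).
page = (b"<html><body>"
        + b"<p>LLM Inflation...</p>" * 500
        + b"</body></html>")

_cache: dict[str, bytes] = {}

def serve(path: str) -> bytes:
    """Compress on the first request, then reuse the cached bytes, so
    the per-request CPU cost of compression disappears after warm-up."""
    if path not in _cache:
        _cache[path] = gzip.compress(page, compresslevel=6)
    return _cache[path]

body = serve("/blog/llm_inflation.html")
assert gzip.decompress(body) == page              # lossless round trip
assert serve("/blog/llm_inflation.html") is body  # cache hit: no CPU spent
print(f"{len(page)} -> {len(body)} bytes on the wire")
```

This is essentially what precompressed-asset serving in web servers does: trade a one-time CPU cost and a little memory for smaller bytes on the wire on every request.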
PeterStuer•34m ago
I call BS.

The 4-paragraph requirement was not introduced 'because LLM'. It was there all along for what should just have been 'gimme 2-3 bullet points'. They wanted Bob to hold back on requesting the new machine he needed, not by denying his request openly, but by making the process convoluted. Now that Bob can cut through the BS, they want to blame the LLM for wasting their time and resources? BS!

tharne•12m ago
Maybe, but another way to look at this is that if someone is going to hold back requesting a new machine because they don't feel like writing a few paragraphs, then they probably don't really need a new computer. On the other hand, if their current machine really is so bad that it's getting in the way of their work they won't hesitate to quickly bang out 4 paragraphs to get a new one. Obviously, this trick does not work with LLMs in the mix.
henriquegodoy•28m ago
Communication is one of the biggest problems; that's why God used it in the Tower of Babel analogy.
thimabi•25m ago
One of the things that makes me hopeful for the future of LLMs is precisely this: humans are needlessly verbose, and LLMs can cut through the crap.

I expect smaller models to become incrementally better at compressing what truly matters in terms of information. Books, reports, blog posts… all kinds of long-form content can be synthesized in just a few words or pages. It’s no wonder that even small LLMs can provide accurate results for many queries.

tharne•16m ago
> humans are needlessly verbose

What a depressing belief. Human communication is about a whole lot more than just getting your point across as quickly and efficiently as possible.

danielbln•20m ago
This image originally came out just around the time of ChatGPT release and captures it well: https://i.imgur.com/RHGD9Tk.png
zacksiri•17m ago
The problem described in this post has nothing to do with LLMs. It has everything to do with work culture and bureaucracy. Rules and laws that don't make sense remain in place because changing them requires time, energy, and effort; most people in companies have either tried and failed or don't care enough to push for change.

This is one example of the "horseless carriage" class of AI solutions. I've begun to question whether a lot of the things we are doing now are even necessary anymore.

I'll give you one more example. The whole "Office" stack of ["Word", "Excel", "Powerpoint"] can also go away. But we still use it because change is hard.

Answer me this question: in the near future, if we have LLMs that can traverse massive amounts of data, why do we need to make Excel sheets anymore? Will we as a society continue to make Excel spreadsheets because we want the insights the sheet provides, or do we make Excel sheets just to make Excel sheets?

The current generation of LLM products, I find, are horseless carriages. Why would you need agents to make spreadsheets when you should just be able to ask the agent for the answers you are looking for from the spreadsheet?

amelius•17m ago
Perhaps we should judge the performance of an LLM by how well it can compress arbitrary information. A higher IQ would mean more compression, after all.
a_shovel•10m ago
Which LLMs perform better or worse will be determined entirely by the scoring formula used and how it penalizes errors. It is not in the nature of an LLM to be capable of lossless compression.
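The point that the ranking is determined entirely by how errors are penalized can be sketched with a toy metric (my own construction for illustration, not a standard benchmark):

```python
def compression_score(original_len: int, compressed_len: int,
                      errors: int, penalty: float = 50.0) -> float:
    """Toy metric: reward shrinking the text, charge `penalty`
    'virtual bytes' per reconstruction error."""
    return original_len / (compressed_len + penalty * errors)

# A terse but slightly wrong summary vs. a verbatim (error-free) copy:
lossy = compression_score(original_len=2000, compressed_len=40, errors=1)
verbatim = compression_score(original_len=2000, compressed_len=2000, errors=0)

# Under a harsh enough penalty, the same summary scores worse than not
# compressing at all -- the "IQ" ranking is an artifact of the weight.
strict = compression_score(2000, 40, 1, penalty=5000.0)
assert lossy > verbatim > strict
```

With the default weight the lossy summary wins by a wide margin; with a 100x harsher penalty it loses to the verbatim copy, so the metric measures the evaluator's tolerance for hallucination as much as the model.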
djoldman•3m ago
> Bob needs a new computer for his job.... In order to obtain a new work computer he has to create a 4 paragraph business case explaining why the new computer will improve his productivity.

> Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM.... The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.” The manager approves the request.

"LLM inflation" as a "bad" thing often reflects a "bad" system.

In the case described, the bad system is the expectation that one has to write, or is more likely to obtain a favorable result from writing, a 4 paragraph business case. Since Bob inflates his words to fill 4 paragraphs and the manager deflates them to summarise, it's clear that the 4 paragraph expectation/incentive is the "bad" thing here.

This phenomenon of assigning the cause of "bad" things to LLMs is pretty rife.

In fact, one could say that the LLM is optimizing given the system requirement: it's a lot easier to get around this bad framework.