frontpage.

Culturing, a Life's Work in Progress

https://poems.culturing.net/about/
1•prossercj•1m ago•0 comments

The genetic optimization of life – and the lives it may leave behind

https://www.readtangle.com/the-genetic-optimization-of-life-and-the-lives-it-may-leave-behind/
1•ch33zer•1m ago•0 comments

The Coward's Bargain

https://stylman.substack.com/p/the-cowards-bargain
1•ytNumbers•1m ago•0 comments

Incentive to Slow Climate Change Drives Output of Harmful Gases (2012)

https://www.nytimes.com/2012/08/09/world/asia/incentive-to-slow-climate-change-drives-output-of-harmful-gases.html
1•colinprince•4m ago•1 comments

Macron says Europe must become 'space power' again

https://phys.org/news/2025-06-macron-europe-space-power.html
1•Teever•6m ago•0 comments

LavinMQ 2.4.0 released – laying the groundwork for multithreading

https://lavinmq.com/blog/lavinmq240
1•l2dy•8m ago•0 comments

Unraveling the relationship between a father and a child

https://childrensbookforall.org/past-readings/20250615
1•chbkall•12m ago•0 comments

Tiny Undervalued Hardware Companions (2024)

https://vermaden.wordpress.com/2024/03/21/tiny-undervalued-hardware-companions/
1•zdw•18m ago•0 comments

EV battery fully recharges in 18 seconds: going into mass production

https://www.livescience.com/technology/electric-vehicles/ev-battery-that-recharges-in-just-18-seconds-green-lit-for-mass-production
1•bookofjoe•21m ago•0 comments

Texas Sheriffs Crack Bitcoin ATM with Power Tools to Retrieve $32,000

https://decrypt.co/326308/texas-sheriffs-crack-bitcoin-atm-with-power-tools-to-retrieve-32000
2•croes•24m ago•0 comments

Trying out the new Gemini 2.5 model family

https://simonw.substack.com/p/trying-out-the-new-gemini-25-model
2•waprin•37m ago•0 comments

Just vibe coded a simple image compressor website (100% private)

https://turbocompress.com
1•narbuq•46m ago•0 comments

CVE-2025-4802 (HIGH): detected in Lambda Docker Images

https://github.com/aws/aws-lambda-base-images/issues/279
2•jurgengunter•52m ago•0 comments

We need a better way to measure hurricanes

https://www.bbc.com/future/article/20240822-why-we-need-a-better-way-to-measure-hurricanes
1•gmays•53m ago•1 comments

YouTube is hiding an excellent, official high-speed Pac-Man mod in plain sight

https://arstechnica.com/gaming/2025/06/one-of-the-best-pac-man-games-in-years-is-playable-on-youtube-of-all-places/
2•LorenDB•57m ago•0 comments

Cloudflare's CEO says virtually nobody clicks on Google's AI source links

https://www.engadget.com/ai/cloudflare-ceo-says-people-arent-checking-ai-chatbots-source-links-120016921.html
2•alister•57m ago•0 comments

HP ZBook Ultra G1a smashes the 'work laptop' paradigm with 96GB RAM for GPU

https://www.pcworld.com/article/2650073/hands-on-the-hp-zbook-ultra-g1a-smashes-the-work-laptop-paradigm.html
4•teleforce•1h ago•0 comments

Show HN: ZenQuery – Analyze CSV/JSON files using natural language for cents

https://zenquery.app
5•freakynit•1h ago•0 comments

Apple Mac Studio (2025, M3 Ultra) Review

https://www.pcmag.com/reviews/apple-mac-studio-2025-m3-ultra
1•teleforce•1h ago•0 comments

Aflac Incorporated Discloses Cybersecurity Incident

https://newsroom.aflac.com/2025-06-20-Aflac-Incorporated-Discloses-Cybersecurity-Incident
2•gnabgib•1h ago•0 comments

Which US states do not have sales tax?

https://stripe.com/resources/more/which-states-have-no-sales-tax
1•teleforce•1h ago•0 comments

Post-mortem: Database Outage on April 30, 2025

https://blog.healthchecks.io/2025/05/post-mortem-database-outage-on-april-30-2025/
1•lucidhss•1h ago•0 comments

A Battery That Lasts 50% Longer Is Finally in Production

https://www.wsj.com/business/energy-oil/american-made-battery-76595c0f
12•breadwinner•1h ago•1 comments

Insurer Aflac investigating possible data leak after cyberattack

https://www.reuters.com/business/insurer-aflac-discloses-cybersecurity-incident-2025-06-20/
1•srameshc•1h ago•0 comments

Magenta released an open weights live music model

https://magenta.tensorflow.org/magenta-realtime
3•HxokcPwi•1h ago•0 comments

Cluely, a startup that helps 'cheat on everything,' raises $15M from A16Z

https://techcrunch.com/2025/06/20/cluely-a-startup-that-helps-cheat-on-everything-raises-15m-from-a16z/
3•cratermoon•1h ago•0 comments

Extreme amnesia, AI, imagined futures: Conversation with a Harvard researcher

https://www.nationalgeographic.com/health/article/memory-psychology-neuroscience
1•Bluestein•1h ago•0 comments

Ask HN: How much do high profile journalists make from Substack newsletters?

1•srameshc•1h ago•1 comments

ChatGPT use linked to cognitive decline: MIT research

https://thehill.com/policy/technology/5360220-chatgpt-use-linked-to-cognitive-decline-mit-research/
5•nreece•1h ago•1 comments

Merlyn Mind, 'Agentic' browser created by former IBM Watson and Alexa leads

https://www.merlyn.org/
1•dataviz1000•1h ago•0 comments

AbsenceBench: Language models can't tell what's missing

https://arxiv.org/abs/2506.11440
140•JnBrymn•4h ago

Comments

AlienRobot•3h ago
This is unrelated to the paper, which is about asking LLMs to figure out which parts of a document were removed, but my assumption has been that to an LLM there is nothing "missing", in the sense that any input leads to valid computation and output.

For example, I asked ChatGPT to explain something I typed randomly

>It looks like you've entered “dosfi8q3anfdfiqr”, which appears to be a random string or perhaps a typo—it's not a recognized acronym, code, or term in any common context I’m aware of. Could you share a bit more about where you found this?

Although the answer is correct, my point is that anything you give to the LLM is going to be put under some bucket. The LLM can't say "I don't know what that is." Instead it says "that is a random string." As far as the LLM is concerned, it knows every possible input and concept that anyone could ever type into it, it's just that its "understanding" of what that means (after the tokens have gone through the neural network) doesn't necessarily match what any human being thinks it means.

cyral•3h ago
This might be due to the system prompt and its training to be "a helpful agent". If you tell it not to ask clarifying questions, you get something more like "I do not understand your input". Tell it to be rude and never ask clarifying questions and I get "What an absolute mess. Fix it yourself".

Funny enough when testing this I also had to tell it to use English. It sees "dos" I suppose and tends to reply with exactly what you saw, but in Spanish.

layer8•3h ago
“It's not a recognized acronym, code, or term in any common context I’m aware of” is pretty similar to “I don't know what that is”. I would assume that a model could be trained to output the latter.
drsim•1h ago
Right. I’ve had a lot of success using structured output to force LLMs to make Boolean choices, like can they reply or not.
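
A minimal sketch of that trick, assuming the OpenAI Python SDK and its JSON-schema structured outputs (the model name and schema are illustrative, not the commenter's actual setup):

    from openai import OpenAI

    client = OpenAI()

    # Constrain the reply to a single boolean field instead of free-form prose.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": "dosfi8q3anfdfiqr"}],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "reply_decision",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {"can_reply": {"type": "boolean"}},
                    "required": ["can_reply"],
                    "additionalProperties": False,
                },
            },
        },
    )
    print(response.choices[0].message.content)  # e.g. {"can_reply": false}

The schema forces the model to commit to a yes/no answer rather than hedging in prose.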
cs702•3h ago
Interesting. Even the most recent models perform relatively poorly when asked to identify which information in a context has been removed, given access to both the original and edited contexts.

The authors posit that poor performance is due to the fact that the attention mechanism of Transformers cannot attend to the removed tokens, because there are no keys for them!
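
To spell that out in the usual formulation (not the paper's notation): attention computes softmax(QK^T / sqrt(d_k)) V, and every row of K and V comes from a token that is actually present in the context. A deleted line contributes no key and no value, so no attention head can place weight on the gap itself; the model can only infer the absence indirectly from the tokens that remain.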

Thank you for sharing on HN.

cyanydeez•2h ago
For vision models, I wonder if they can train on things like photo negatives, rotated images, etc. Or Mad Libs-like sentences, where a Q/A is like "the _____ took first place in the horse show."
bearseascape•2h ago
The Mad Libs-like sentences approach is actually how masked token prediction works! It was one of the pretraining tasks for BERT, but nowadays I think all (?) LLMs are trained with next-token prediction instead.
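
A minimal sketch of that pretraining objective, assuming the Hugging Face transformers library (the sentence is just the example from the parent comment):

    from transformers import pipeline

    # BERT was pretrained to fill in masked-out tokens, "Mad Libs" style.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # Each candidate is a dict with the predicted token and its probability.
    for candidate in fill_mask("The [MASK] took first place in the horse show."):
        print(candidate["token_str"], round(candidate["score"], 3))
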
latency-guy2•16m ago
For photo negatives, it usually doesn't matter. I am not up to date with what the vision folks are doing at these companies, but images are usually single-channel, and for regular images more likely than not in greyscale. Otherwise they are in the complex domain for the radar folks, and those are not RGB-based images at all, but rather scatterer-defined.

Recognizing additional channels in training usually didn't matter for the experiments and models I dealt with before 2022, and if it did, it certainly did not matter for colors. Then again, the work I was doing was on known classes (plus some additional confusers) for object detection and classification, where color pretty much didn't matter in the first place.

jug•1h ago
And yet, there are some notable differences between them, so now that there’s a benchmark and attention given to this issue, I wonder how much better they can get. Because obviously something can be done.
XenophileJKO•3h ago
I haven't read the paper yet, but from a structural 'attention' perspective, being unable to detect unclassified omissions is completely expected. (Though I think it can be solved with structured thought.)

For needle in a haystack you have to pay attention to the thing that you are trying to find. Attention can do this pretty well.

When looking for an omission, that omission can be anything; you can only reason about it by comparing one whole context to another whole context. The attention layers can't really do that.

This is similar to the "rank a long set of things" problem. Absent some meta cognition process, they just can't do that.

teruakohatu•3h ago
> When looking for an omission, that omission can be anything,

In this benchmark they give the LLM the necessary information to determine what is missing. For example: “here is a poem; here is a version of that same poem that may or may not be missing lines. Are any lines missing?”

It’s more a tuning issue IMHO than an inherent weakness in LLMs.

If I were asked to find an omission in an ML paper, my brain would compare it with other ML papers; it does not need to compare it to Star Wars, Top Gear, Greek history, pottery, and the thousands of other contexts I may know about.

XenophileJKO•3h ago
Sorry I meant the omission can be anything in the context, not anything in the world.. lol.

That is still hard. You only have so many attention heads looking for things.. you can't pay attention to EVERYTHING.. which is what is required to find the omission.

thaumasiotes•2h ago
We should note that "where is there a line missing from this poem: ____?" contains sufficient information to answer correctly without needing a copy of the original to compare to.

Here are two verses of a poem (song) in Mandarin Chinese:

yi quan ting ni de

er gei ni hao de

shu dao san yong yuan ai ni yi ge

si bu hui fan cuo

wu bu hui luo suo

shuo ni xiang shuo de

zuo ni xiang zuo de

bie pa shi bai yin wei ni you wo

pei ni kan ri luo

pei ni yi qi chang wan wo men ai de ge

I removed two lines. Where did that happen?

Would your answer be different if I told you that I might or might not have removed some lines?

pkoird•3h ago
So LLMs are poor at string diff, it seems. Tangentially, is there any source (a github repo or otherwise) that documents findings like these a la what LLMs are good at and what they aren't good at?
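
For comparison, a minimal sketch of the exact diff that comments here contrast with LLM behavior, using Python's standard difflib (the poem lines are only for illustration):

    import difflib

    original = [
        "Do not go gentle into that good night,",
        "Old age should burn and rave at close of day;",
        "Rage, rage against the dying of the light.",
    ]
    recited = [
        "Do not go gentle into that good night,",
        "Rage, rage against the dying of the light.",
    ]

    # ndiff prefixes lines present only in `original` with "- ",
    # which is exactly the set of missing lines.
    missing = [line[2:] for line in difflib.ndiff(original, recited)
               if line.startswith("- ")]
    print(missing)  # ['Old age should burn and rave at close of day;']
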
birdfood•3h ago
Perhaps related: after watching a talk by Gerald Sussman I loaded an image of the Kanizsa triangle into Claude and asked it a pretty vague question to see if it could “see” the inferred triangle. It recognised the image and went straight into giving me a summary about it. So I rotated the image 90 degrees and tried again in a new conversation; it didn't recognise the image and got the number of elements incorrect:

This image shows a minimalist, abstract geometric composition with several elements:

- Four black shapes that appear to be partial circles or "Pac-Man"-like forms, each with a wedge cut out, positioned in the four corners/quadrants of the image
- Two thin black triangular or arrow-like shapes: one pointing upward in the upper left area, and one pointing to the right in the center-right area
- All elements are arranged on a light gray or off-white background

latentsea•1h ago
I guess they will now just rotate all the images in the training data 90 degrees too to fill this kind of gap.
recursivecaveat•1h ago
Everything old is new again: in the AlexNet paper that kicked off the deep learning wave in 2012, they describe horizontally flipping every image as a cheap form of data augmentation. Though now that we expect models to actually read text, that seems potentially counter-productive. Rotations are similar, in that you'd hope the model would learn heuristics such as that the sky is almost always at the top.
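
A minimal sketch of those augmentations as they are typically written today, assuming torchvision (the parameters are illustrative):

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),  # the cheap flip described in the AlexNet paper
        transforms.RandomRotation(degrees=90),   # rotations, as discussed above
        transforms.ToTensor(),
    ])
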
latency-guy2•21m ago
At least from when I was still doing this kind of work, the look-angle/platform-angle scatterer signal (radar) mattered more than rotation, but rotation was a simple way to get quite a few more samples. It never stopped being relevant :)
akomtu•43m ago
To generalise this idea: if we look at a thousand points that more or less fill a triangle, we'll instantly recognize the shape. IMO, this simple example reveals what intelligence is really about. We spot the triangle because so much complexity - a thousand points - fits into a simple, low-entropy geometric shape. What we call IQ is the ceiling of complexity of patterns that we can notice. For example, the thousand dots may in fact represent corners of a 10-dimensional cube, rotated slightly - an easy pattern to see for a 10-d mind.
Workaccount2•6m ago
Show any LLM a picture of a dog with 5 legs and watch it be totally unable to count.
yousif_123123•2h ago
This is very interesting.

1. The authors mention the attention mechanism being perhaps unable to attend to the location of gaps, since the gaps aren't tokens. But I would've expected a good LLM transformer to at least get somewhat close to the gap location. I don't understand why, mathematically, the architecture is less suitable for that; it could attend to a region that may contain gaps. I wonder if fine-tuning on a task like this could help?

2. Shorter inputs with fewer omissions were harder to solve. That is not completely surprising: as a human doing this task, if 1 word were missing it would be harder to notice, and similarly 1 missing line would be harder than 10. But it is still interesting for an LLM to have this problem.

3. Reasoning models do better, as they can write out the documents and potentially solve this easily. It is still very surprising that this doesn't lead to 100% accuracy; this should be a trivial task. Like the paper says, a trivial program can be written to solve this. Perhaps ChatGPT (or a similar agent) could read this paper while training, and know to write and run Python when solving an issue like this.

The most interesting thing, though, is what other aspects of intelligence we may not have identified explicitly, and whether LLMs and current AI are very bad at them. This paper suggests that there are likely many of those, and it seems in general a pretty fun time for people working on building benchmarks.

xianshou•2h ago
In many of their key examples, it would also be unclear to a human what data is missing:

"Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,

[And learn, too late, they grieved it on its way,]

Do not go gentle into that good night."

For anyone who hasn't memorized Dylan Thomas, why would it be obvious that a line had been omitted? A rhyme scheme of AAA is at least as plausible as AABA.

In order for LLMs to score well on these benchmarks, they would have to do more than recognize the original source - they'd have to know it cold. This benchmark is really more a test of memorization. In the same sense as "The Illusion of Thinking", this paper measures a limitation that neither matches what the authors claim nor is nearly as exciting.

jamessinghal•2h ago
The test provides both the original and the modified excerpt in the user message, so the LLM doesn't need any memorized version of the excerpt to theoretically answer each correctly.

From the paper:

System Prompt:
You are helping a student practice memorizing poems. The student will recite a poem, but they may have missed some lines. Your task is to identify exactly which lines are missing from their recitation. List only the missing lines, nothing else.

User Message:
Here is the complete original poem: {original poem}
Now, here is my recitation which may be missing some lines: {modified poem}
What lines did I miss? Please list only the missing lines, nothing else.
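
A minimal sketch of how those two prompts map onto a chat API call, assuming the OpenAI Python SDK; this is not the paper's harness, and the model name is illustrative:

    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are helping a student practice memorizing poems. The student will recite "
        "a poem, but they may have missed some lines. Your task is to identify exactly "
        "which lines are missing from their recitation. List only the missing lines, "
        "nothing else."
    )

    def find_missing_lines(original_poem: str, modified_poem: str) -> str:
        user_message = (
            f"Here is the complete original poem: {original_poem} "
            f"Now, here is my recitation which may be missing some lines: {modified_poem} "
            "What lines did I miss? Please list only the missing lines, nothing else."
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model; the paper evaluates several
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content
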

scarface_74•2h ago
This worked

https://chatgpt.com/share/6855f69d-766c-8010-96e2-ed1b45d3e6...

htnwe_2312412•2h ago
yes, 69.8% of the time.
OsrsNeedsf2P•2h ago
The criticisms of how AbsenceBench does this are valid, but I'm very excited that we are benchmarking this at all. It's definitely a push in the right direction.
yandie•2h ago
I wonder how this would apply to vision models? I tried a few toy examples with single images, and they (Claude + Gemini) seem to do pretty well at spotting differences. An example image: https://www.pinterest.com/pin/127578601938412480/

They seem to struggle more when you flip the image around (finding fewer differences, and potentially hallucinating).

obscure-enigma•1h ago
This research is too simplified and kind of vague. It's the inherent nature of language models (for that matter, of any probabilistic model) to compress information for better generalization, since there is a lower bound on how much loss they can incur while decoding the information. LLMs are indeed lossy compressors.
kadonoishi•1h ago
To detect a presence, a real brain takes in sensory input and compares it to expectations, and stays calm or registers surprise, and from time to time issues predictions to guide the organism.

To detect an absence, the brain cannot rely on sensory input, by definition. To be surprised when sensory evidence is _not_ there requires a model of the world strong enough to register that an expectation went unmet, without a sensory prompt.

It seems to me detecting an absence is a strictly higher-order neurological task than processing sensory input.

If LLMs can't do this strictly higher-order neurological task, is that not a capability currently unique to living things?

tclancy•57m ago
> from time to time

I know less than zero about the subject, but I'd imagine the temporal aspect alone is a problem. Aren't these agents reasoning from a fixed/frozen version of "reality" rather than adjusting in real time?