
Calibre, AI, and one size not fitting all

https://neilzone.co.uk/2025/12/calibre-ai-and-one-size-not-fitting-all/
1•edward•41s ago•0 comments

Ten years ago in Hacker News frontpage

https://news.ycombinator.com/front?day=2015-12-08
1•kevin061•1m ago•0 comments

Australian Age Assurance Technology Trial – Final Report

https://www.infrastructure.gov.au/department/media/publications/age-assurance-technology-trial-fi...
1•Erikun•1m ago•0 comments

Show HN: Chromaflow – flow editor for parametric color palettes

https://chromaflow-editor.vercel.app/
1•pedroscosta•3m ago•0 comments

Everything Is Context: Agentic File System Abstraction for Context Engineering

https://arxiv.org/abs/2512.05470
1•robmao•3m ago•1 comment

Go Proposal: Secret Mode

https://antonz.org/accepted/runtime-secret/
1•enz•3m ago•0 comments

Running Rust, Go, Python, and JavaScript AI Agents Inside the JVM Using WASM

https://blog.mozilla.ai/polyglot-ai-agents-webassembly-meets-the-java-virtual-machine-jvm/
1•mzlaai•3m ago•0 comments

Howard Marks Says AI Is 'Terrifying' for Jobs, Queries Debt Cost

https://www.bloomberg.com/news/articles/2025-12-09/howard-marks-says-ai-is-terrifying-for-jobs-qu...
1•petethomas•3m ago•0 comments

SpaceX to Pursue 2026 IPO Raising Far Above $30B

https://www.bloomberg.com/news/articles/2025-12-09/spacex-said-to-pursue-2026-ipo-raising-far-abo...
1•mfiguiere•4m ago•0 comments

Chatting with Glue

https://a9.io/glue-comic/
1•duck•4m ago•0 comments

CppCon 2025: Building Secure C++ Applications: A Practical End-to-End Approach [video]

https://www.youtube.com/watch?v=GtYD-AIXBHk
1•pjmlp•5m ago•0 comments

Join the on-call roster, it'll change your life

https://serce.me/posts/2025-12-09-join-oncall-it-will-change-your-life
1•SerCe•6m ago•0 comments

I misused LLMs to diagnose myself and ended up bedridden for a week

https://blog.shortround.space/blog/how-i-misused-llms-to-diagnose-myself-and-ended-up-bedridden-f...
1•shortrounddev2•7m ago•0 comments

Zillow has removed extreme weather risk data

https://www.cnn.com/2025/12/02/climate/zillow-climate-data-extreme-weather-first-street-redfin
2•kevin061•7m ago•1 comment

OpenAI economist quits, alleging that they are verging into AI Advocacy

https://www.wired.com/story/openai-economic-research-team-ai-jobs/
3•gsf_emergency_6•7m ago•0 comments

Is Vibe Coding Safe? Benchmarking Vulnerability of Agent-Generated Code

https://arxiv.org/abs/2512.03262
2•flail•8m ago•0 comments

A standard language for machine-readable code comments

https://github.com/pomponchik/metacode
1•levzettelin•9m ago•1 comment

Japan's exhaust filter experts attract carbon capture companies' attention

https://asia.nikkei.com/business/technology/japan-s-exhaust-filter-expertise-attracts-carbon-capt...
1•gsf_emergency_6•10m ago•1 comment

Dynamic Island and Quake Terminal app, QuakeNotch 2.1 is released

https://www.patreon.com/posts/quakenotch-2-1-145469207
1•rohanrhu•11m ago•0 comments

React2Shell CVSS 10.0 Vulnerability

https://react2shell.com/
1•jnovacho•12m ago•1 comment

Neural cellular automata: Applications to biology and beyond classical AI

https://www.sciencedirect.com/science/article/pii/S1571064525001757?dgcid=coauthor
1•lifty•17m ago•0 comments

Consolidated / Fidium Fiber ISP seems down in Maine (state-wide)

https://community.designtaxi.com/topic/20787-is-consolidated-fidium-fiber-down-december-9-2025/
1•gregsadetsky•19m ago•0 comments

Let's Encrypt Certificate Lifetimes go from 90 days to 45 days

https://letsencrypt.org/2025/12/02/from-90-to-45
3•nvader•23m ago•1 comment

Context Engineering in Manus

https://rlancemartin.github.io/2025/10/15/manus/
2•speckx•23m ago•0 comments

Are Two Heads Better Than One?

https://eieio.games/blog/two-heads-arent-better-than-one/
1•eieio•24m ago•0 comments

Static Sites, Stupid Simple

https://statue.dev/blog/static-sites-stupid-simple/
1•brantf•25m ago•0 comments

Trying out the queue for AI Workloads

https://leblancfg.com/trying-absurd-postgres-workflows.html
1•ingve•25m ago•0 comments

Cockpit audio of Alaska Airlines pilot who tried to crash plane mid-flight

https://nypost.com/2025/12/09/us-news/wild-new-cockpit-audio-reveals-moment-alaska-airlines-pilot...
1•appreciatorBus•26m ago•0 comments

Linux Foundation Announces the Formation of the Agentic AI Foundation

https://aaif.io/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation-aaif-...
3•bretpiatt•26m ago•2 comments

U.S. F-18 fighter jets enter Venezuelan airspace for 40 minutes

https://www.miamiherald.com/news/nation-world/world/americas/venezuela/article313550838.html
4•belter•28m ago•0 comments

LLMs as Unbiased Oracles

https://jazzberry.ai/blog/test-generation-as-the-foundation
34•MarcoDewey•7mo ago

Comments

Jensson•7mo ago
> An LLM, specifically trained for test generation, consumes this specification. Its objective is to generate a diverse and comprehensive test suite that probes the specified behavior from an external perspective.

If one of these tests is wrong, though, it will ruin the whole thing. And LLMs are much more likely to make a math error (which would result in a faulty test) than to implement a math function the wrong way, so this probably won't make it better at generating code.
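
To make the failure mode concrete, a minimal sketch in Python; the mean function and the "generated" tests are invented for illustration:

    # A correct implementation of the (hypothetical) function under test.
    def mean(xs):
        return sum(xs) / len(xs)

    # Imagined LLM-generated tests: two encode the spec correctly,
    # one encodes a math error.
    def test_mean_basic():
        assert mean([1, 2, 3]) == 2

    def test_mean_single():
        assert mean([5]) == 5

    def test_mean_faulty():
        # The model miscomputed (1 + 2 + 3 + 4) / 4 as 3: a wrong oracle,
        # so the correct implementation above now "fails" the suite.
        assert mean([1, 2, 3, 4]) == 3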

MarcoDewey•7mo ago
I think this is a seriously excellent point.

The bet that I am making is that the system reduces its error rate by splitting a broad task into two more focused tasks.

However, it is possible that generating meaningful test cases is a harder problem (with a higher error rate) than producing code. If this is the case, then this idea I am presenting would compound the error rate.
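
A back-of-the-envelope sketch of that compounding, with made-up error rates:

    # Illustrative, assumed error rates for the two stages.
    p_test = 0.10  # chance the generated suite encodes a wrong oracle
    p_code = 0.05  # chance the generated code is wrong

    # The pipeline ends well only if both stages are right (assuming independence).
    p_ok = (1 - p_test) * (1 - p_code)
    print(f"P(correct tests and correct code) = {p_ok:.3f}")  # 0.855

    # A 5% code-gen error rate has become a ~14.5% pipeline error rate,
    # unless test generation is meaningfully more reliable than code generation.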

satisfice•7mo ago
If your premises and assumptions are sufficiently corrupted, you can come to any conclusion and believe you are being rational. Like those dreams where you walk around without pants on, and you are more worried about not having pants than about how your pants keep going missing. Your brain is not present enough to find the root of the problem.

An LLM is not unbiased, and you would know that if you tested LLMs.

Apart from biases, an LLM is not a reliable oracle; you would know that if you tested LLMs.

The reliabilities and unreliabilities of LLMs vary in discontinuous and unpredictable ways from task to task, model to model, and within the same model over time. You would know this if you tested LLMs. I have. Why haven’t you?

Ideas like this are promoted by people who don’t like testing, and don’t respect it. That explains why a concept like this is treated as equivalent to a tested fact. There is a name for it: wishful thinking.

walterbell•7mo ago
> wishful thinking

Given the economic component of LLM wishes, we can look at prior instances of wishing at scale: https://en.wikipedia.org/wiki/Tulip_mania

troupo•7mo ago
There's a more recent one: https://blog.mollywhite.net/blockchain/

roenxi•7mo ago
Blockchains are past the point where they could be described as a mania; it is clear they are a permanent addition to the world of finance, probably as a multi-billion or -trillion dollar market cap asset class. If crypto was going to fail, the interest rate rises would have done it by now.

troupo•7mo ago
Tulips. You're describing tulips.
MarcoDewey•7mo ago
I believe that I have unintentionally misled you. When I say "unbiased oracle" I am talking specifically about the test oracle being unbiased by how the software was implemented, i.e., black-box testing.

I don't think I made the point very clear in the blog (I will rectify that), but I am saying that because LLMs are so easily biased by their prompting, they sometimes perform better on black-box testing tasks than on white-box testing.
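
As a rough sketch of that distinction, here is how the two prompt variants might differ; the helper and the prompt wording are invented, not from the blog:

    def make_test_prompt(spec, implementation=None):
        """Build a test-generation prompt; omitting the code keeps the oracle black-box."""
        prompt = (
            "Write a pytest suite for a function meeting this specification.\n"
            "Specification:\n" + spec + "\n"
        )
        if implementation is not None:
            # White-box variant: the model now sees, and is anchored by,
            # the implementation, including any bugs already in it.
            prompt += "\nImplementation:\n" + implementation + "\n"
        return prompt

    # Black-box: the tests derive only from the spec, not from how the code was written.
    prompt = make_test_prompt("mean(xs) returns the arithmetic mean of a non-empty list")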

satisfice•7mo ago
I appreciate that you replied. It warms my heart, frankly. It gives me hope.

I don't want to have a big argument about this right at this moment. But truly, thank you for replying!

TazeTSchnitzel•7mo ago
Is this a blog post that's incomplete, or a barely disguised ad?

saagarjha•7mo ago
You'd think AI would have told them not to post it
mock-possum•7mo ago
It's hard to convince LLMs to be anything but supportive; lately I've been finding joy in reading their tone as patronizing.

“Exactly — that’s a very clean way to lay it out. You nailed it.”

brahyam•7mo ago
The amount of time it would take to write a formal spec for the code I need is more than it would take to generate the code, so this doesn't sound like something that will go mainstream, except in those industries where formal code specs are already in place.
MarcoDewey•7mo ago
Yes, this test-driven approach will likely increase generation time upfront. However, the payoff is more reliable code being generated. This will lead to less debugging and fewer reprompts overall, which saves time in the long run.

I also agree on the specification formality. Even a less formal spec provides a clearer boundary for the LLM during code generation, which should improve results.
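
One way such a spec-to-tests-to-code loop might look; the three callables are hypothetical stand-ins for an LLM client and a test runner, not anything from the post:

    def build_from_spec(spec, generate_tests, generate_code, run_tests, max_attempts=3):
        """Spec -> tests -> code, reprompting with concrete failures."""
        tests = generate_tests(spec)               # black-box: sees only the spec
        code = generate_code(spec, feedback=None)
        for _ in range(max_attempts):
            failures = run_tests(code, tests)      # run the candidate against the suite
            if not failures:
                return code, tests                 # a candidate passed every test
            # Reprompt with the concrete failures rather than a vague "try again".
            code = generate_code(spec, feedback=failures)
        raise RuntimeError("no candidate passed the generated suite")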

bluefirebrand•7mo ago
LLMs are absolutely biased

They are biased by the training dataset, which probably also reflects the biases of the people who select the training dataset

They are biased by the system prompts that are embedded into every request to keep them on the rails

They are even biased by the prompt that you write into them, which can lead them to incorrect conclusions if you design the prompt to lead them to it

I think it is a very careless mistake to think of LLMs as unbiased or neutral in any way

MarcoDewey•7mo ago
You are correct that the notion of LLMs being completely unbiased or neutral does not make sense due to how they are trained. Perhaps my title is even misleading if taken at face value.

When I talk about "unbiased oracles" I am speaking in the context of black box testing. I'm not suggesting they are free from all forms of bias. Instead, the key distinction I'm trying to draw is their lack of implementation-level bias towards the specific code they are testing.

gwern•7mo ago
LLMs are also heavily biased after chatbot tuning leads to mode-collapse. That's why you see the same verbal tics coming out of them, like the em-dashes or the 'twist ending' in the more recent 4os. And if LLMs really were unbiased, you'd expect better scaling when you tried to bruteforce code correctness. Training a 'test LLM' will just wind up inheriting a lot of the shared blindspots. They aren't independent of the implementation at all (just like humans are not independent, even when they didn't write the original, and didn't see it either; and this is why you can't simply throw _n_ programmers at a piece of code and be certain you got all the bugs, and why fuzzers will continue to rampage through code).
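
A toy model of why shared blind spots cap the gains from drawing more candidate implementations; the numbers are illustrative, not measurements:

    def p_ship_bug(n, p_bug, p_shared):
        # Shared blind spot: every candidate has the bug and the generated
        # suite cannot see it, so extra samples never help.
        # Otherwise: bugs are independent, the suite rejects them, and a bug
        # ships only if all n candidates are buggy.
        return p_shared + (1 - p_shared) * p_bug ** n

    for n in (1, 2, 4, 8, 16):
        print(n, round(p_ship_bug(n, p_bug=0.3, p_shared=0.1), 4))
    # The error rate decays toward the shared-blind-spot floor of 0.1, not to zero.
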
stuaxo•7mo ago
The code correctness part is very true.

I don't mind LLMs being part of the journey to code, but they shouldn't be the end product.

When I see something submitted by a colleague that doesn't fit our problem and tech stack well, I go and ask an LLM, and it outputs very similar code.

It's clear at that point that they submitted heavily LLM-produced code without giving it the work it needed.

Muromec•7mo ago
This, and state actors target AI crawlers specifically to poison LLMs with propaganda.

ninetyninenine•7mo ago
No, this is just an overly pedantic and technical way of looking at it.

First of all, you'll note that by the exact same reasoning, all people are also biased. You know this. Everyone knows that all people are biased. This isn't something you don't know.

So if every single intelligence, human or not, is biased, what is this article truly talking about? The article is basically saying LLMs are LESS biased than humans. Why are LLMs less biased than humans? Well, maybe because the training set of an LLM is less biased than the training set given to a human. This makes sense, right? A human is made more biased by his individual experience and his parents' biases, while an LLM is literally inundated with as many sources of textual information as possible, with no attempt at bias due to the sheer volume of knowledge being shoved in there.

The article is basically referring to this.

But you will note, interestingly, that LLMs are biased towards textual data. They understand the world as if they have no eyes and ears, only text, so the way they think reflects this bias. But in terms of textual knowledge, I think we can all agree they are less biased than humans.

Evidence: an LLM is not an atheist or a theist or an agnostic. But you, reader, are at the very least one of those three things.

neuroelectron•7mo ago
Yeah, that would be cool.

MarcoDewey•7mo ago
Improving code generation would be awesome :)

neuroelectron•7mo ago
Unfortunately, Microsoft/Google needs those models for themselves.
fallinditch•7mo ago
I think it makes a lot of sense to employ various specialized LLMs in the software development lifecycle: one that's good at ideation and product development, one that fronts the organizational knowledge base, one for testing code, one (or more) for coding, etc., maybe even one whose job it is to always question your assumptions.
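
As a sketch, that division of labor could be as simple as a role-to-model routing table; the role names and model IDs below are placeholders, not recommendations:

    # Hypothetical role-based routing for a dev pipeline; model IDs are made up.
    ROLE_MODELS = {
        "ideation": "model-a",
        "knowledge_base": "model-b",   # fronts the organizational knowledge base
        "test_generation": "model-c",  # ideally a different family than the coder
        "coding": "model-d",
        "devils_advocate": "model-e",  # always questions your assumptions
    }

    def model_for(role):
        return ROLE_MODELS[role]
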
Mbwagava•7mo ago
Unbiased seems like a pipe dream. Unbiased between which perspectives? Would the set of perspectives chosen not be de facto bias?

sega_sai•7mo ago
I think the unbiasedness is a complete red herring here, but I do agree with the point about working on the tests and the implementations separately. Ideally you'd want two completely different LLMs working on them. But I think the question is: how trustworthy are the LLM-written tests? Will human review of them take more time than writing the code itself? For non-critical applications it probably doesn't matter, but in the end I think people will be looking for some guarantee, or confidence, that errors happen with frequency less than X%. And I don't think those exist now. And given that the models change so frequently, it's also hard to be sure that something working fine yesterday will still work today.
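
On "errors with frequency less than X%": one standard back-of-the-envelope for that kind of guarantee is the rule of three. After n independent, failure-free trials, an approximate 95% upper bound on the true error rate is 3/n:

    def rule_of_three_upper_bound(n_trials):
        """~95% upper bound on the error rate after n failure-free trials."""
        return 3 / n_trials

    # To claim "errors happen less than 1% of the time" with ~95% confidence,
    # you need about 300 clean runs; for 0.1%, about 3,000.
    for n in (300, 3000):
        print(n, rule_of_three_upper_bound(n))
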
MarcoDewey•7mo ago
I believe that the unprecedented scale of LLM-generated code will demand a novel approach to software review and testing. Human review may not be able to keep up (or will it become the bottleneck?)