
Bose is open-sourcing its old smart speakers instead of bricking them

https://www.theverge.com/news/858501/bose-soundtouch-smart-speakers-open-source
392•rayrey•1h ago•64 comments

The Jeff Dean Facts

https://github.com/LRitzdorf/TheJeffDeanFacts
134•ravenical•3h ago•57 comments

An Honest Review of Go

https://benraz.dev/blog/golang_review.html
9•benrazdev•34m ago•1 comments

Lights and Shadows (2020)

https://ciechanow.ski/lights-and-shadows/
167•kg•5d ago•20 comments

Show HN: DeepDream for Video with Temporal Consistency

https://github.com/jeremicna/deepdream-video-pytorch
26•fruitbarrel•2h ago•10 comments

Project Patchouli: Open-source electromagnetic drawing tablet hardware

https://patchouli.readthedocs.io/en/latest/
340•ffin•10h ago•36 comments

AI Coding Assistants Are Getting Worse

https://spectrum.ieee.org/ai-coding-degrades
77•voxadam•51m ago•77 comments

A closer look at a BGP anomaly in Venezuela

https://blog.cloudflare.com/bgp-route-leak-venezuela/
272•ChrisArchitect•9h ago•132 comments

Show HN: A Daily Bible Game

https://bibdle.com
26•egglemonsoup•1h ago•18 comments

Japanese electronics store pleads for old PCs amid ongoing hardware shortage

https://www.tomshardware.com/desktops/pc-building/major-japanese-electronics-store-begs-customers...
24•speckx•53m ago•10 comments

Open Infrastructure Map

https://openinframap.org
297•efskap•12h ago•64 comments

Kernel bugs hide for 2 years on average. Some hide for 20

https://pebblebed.com/blog/kernel-bugs
234•kmavm•13h ago•109 comments

Eat Real Food

https://realfood.gov
1003•atestu•22h ago•1368 comments

Mothers (YC X26) Is Hiring

https://jobs.ashbyhq.com/9-mothers
1•ukd1•4h ago

The price of fame? Mortality risk among famous singers

https://jech.bmj.com/content/early/2025/11/30/jech-2025-224589
25•ingve•4d ago•17 comments

The Napoleon Technique: Postponing things to increase productivity

https://effectiviology.com/napoleon/
177•Khaine•3d ago•96 comments

Shipmap.org

https://www.shipmap.org/
708•surprisetalk•1d ago•111 comments

Go.sum is not a lockfile

https://words.filippo.io/gosum/
128•pabs3•12h ago•54 comments

Lessons from Hash Table Merging

https://gist.github.com/attractivechaos/d2efc77cc1db56bbd5fc597987e73338
58•attractivechaos•6d ago•13 comments

Anyone have experiences with Audio Induction Loops?

https://en.wikipedia.org/wiki/Audio_induction_loop
48•evolve2k•3d ago•27 comments

ChatGPT Health

https://openai.com/index/introducing-chatgpt-health/
379•saikatsg•20h ago•497 comments

Tailscale state file encryption no longer enabled by default

https://tailscale.com/changelog
325•traceroute66•19h ago•129 comments

Looking for Alice (2023)

https://www.henrikkarlsson.xyz/p/looking-for-alice
7•noleary•5d ago•1 comments

The Q, K, V Matrices

https://arpitbhayani.me/blogs/qkv-matrices/
166•yashsngh•1d ago•66 comments

LaTeX Coffee Stains (2021) [pdf]

https://ctan.math.illinois.edu/graphics/pgf/contrib/coffeestains/coffeestains-en.pdf
369•zahrevsky•1d ago•87 comments

How Google got its groove back and edged ahead of OpenAI

https://www.wsj.com/tech/ai/google-ai-openai-gemini-chatgpt-b766e160
188•jbredeche•23h ago•247 comments

The virtual AmigaOS runtime (a.k.a. Wine for Amiga:)

https://github.com/cnvogelg/amitools/blob/main/docs/vamos.md
99•doener•15h ago•24 comments

Our Changing Planet, as Seen from Space

https://e360.yale.edu/digest/nasa-satellite-images-2025
6•YaleE360•50m ago•0 comments

Musashi: Motorola 680x0 emulator written in C

https://github.com/kstenerud/Musashi
104•doener•15h ago•10 comments

NPM to implement staged publishing after turbulent shift off classic tokens

https://socket.dev/blog/npm-to-implement-staged-publishing
193•feross•21h ago•93 comments

Show HN: KeelTest – AI-driven VS Code unit test generator with bug discovery

https://keelcode.dev/keeltest
28•bulba4aur•1d ago
I built this because Cursor, Claude Code, and other agentic AI tools kept giving me tests that looked fine but failed when I ran them. Or worse - I'd ask the agent to run them and it would start looping: fix the tests, those fail, then it starts "fixing" my code so the tests pass, or just deletes assertions so they "pass".

Out of that frustration I built KeelTest - a VS Code extension that generates pytest tests and executes them. I got hooked and decided to push the project forward. When tests fail, it tries to figure out why:

- Generation error: Attempts to fix it automatically, then tries again

- Bug in your source code: flags it and explains what's wrong

How it works:

- Static analysis to map dependencies, patterns, services to mock.

- Generate a plan for each function, including which edge cases to cover

- Generate those tests

- Execute them in a "sandbox"

- Self-heal failures or flag source bugs
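
Roughly, the flow looks something like this - a simplified Python sketch with made-up names, not the actual KeelTest internals:

    # Simplified sketch of the pipeline above; every name here is illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Failure:
        test_name: str
        message: str

    @dataclass
    class RunResult:
        passed: int = 0
        failures: list[Failure] = field(default_factory=list)

    def analyze(path: str) -> dict:
        # static analysis: dependencies, patterns, services to mock
        return {"path": path, "mocks": []}

    def plan(ctx: dict) -> list[str]:
        # per-function plan of cases to cover
        return ["test_happy_path", "test_edge_case"]

    def generate(planned: list[str]) -> str:
        # an LLM call would go here; stubbed out
        return "\n".join(f"def {name}(): ..." for name in planned)

    def run_sandboxed(test_code: str) -> RunResult:
        # pytest would run in an isolated environment here
        return RunResult(passed=len(test_code.splitlines()))

    def pipeline(path: str) -> None:
        result = run_sandboxed(generate(plan(analyze(path))))
        for failure in result.failures:
            # a review step decides: self-heal the test, or flag a source bug
            print(f"needs review: {failure.test_name}: {failure.message}")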

Python + pytest only for now. Alpha stage - not all codebases work reliably. But in testing on personal projects and a few production apps at work, it's been consistently decent. It works best on simpler applications and sometimes glitches on monorepo setups. Supports Poetry/UV/plain pip setups.

Install from VS Code marketplace: https://marketplace.visualstudio.com/items?itemName=KeelCode...

A more detailed writeup on how it works: https://keelcode.dev/blog/introducing-keeltest

The free tier is 7 test files/month (current limit is <=300 source LOC). To make it easier to try without signing up, I'm giving away a few API keys (they share a quota of ~30 generated test files):

KEY-1: tgai_jHOEgOfpMJ_mrtNgSQ6iKKKXFm1RQ7FJOkI0a7LJiWg

KEY-2: tgai_NlSZN-4yRYZ15g5SAbDb0V0DRMfVw-bcEIOuzbycip0

KEY-3: tgai_kiiSIikrBZothZYqQ76V6zNbb2Qv-o6qiZjYZjeaczc

KEY-4: tgai_JBfSV_4w-87bZHpJYX0zLQ8kJfFrzas4dzj0vu31K5E

Would love your honest feedback on where this could go next, and on which setups it failed and how - it has quite verbose debug output at this stage!

Comments

ericyd•1d ago
I'd be curious to hear more about how it determines when a failure is a source code bug. In my experience it's very hard to encapsulate the "why" of a particular behavior in a way the agents will understand. How does this tool know that the test it wrote indicates an issue in the source vs an issue in the test?
bulba4aur•1d ago
Hey, thanks for the question.

So from my experience with LLMs, if you ask them directly "is this a bug or a feature" they might start hallucinating and assume stuff that isn't there.

I found in a few research/blog posts that if you ask the LLM to categorize (basically label) the issue and provide a score for which category it belongs to, it performs very well.

So that's exactly what this tool does: when it sees a failing test, it formulates the prompt in the following way:

    ## SOURCE CODE UNDER TEST:
    ## FAILED TEST CODE:
    ## PYTEST FAILURE FOR THIS TEST:
    ## PARSED FAILURE INFO:
    ## YOUR TASK:
    Perform a deep "Step-by-Step" analysis to determine if this failure is:
    1. *hallucination*: The test expects behavior, parameters, or side effects that do NOT exist in the source code.
    2. *source_bug*: The test is logically correct based on the requirements/signature, but the source code has a bug (e.g., missing await, wrong logic, typo).
    3. *mock_issue*: The test is correct but the technical implementation of mocks (especially AsyncMock) is problematic.
    4. *test_design_issue*: The test is too brittle, over-mocked, or has poor assertions.

Then it also assigns a "confidence" score to its answer. Based on that, it either fully regenerates the tests, comments on the bug in the source, fixes the mocks, or fully redesigns the test (if it's too brittle).
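
As an illustrative sketch (not the real extension code), the routing step is essentially:

    # Illustrative routing logic only; the actual prompt/response format may differ.
    import json

    ACTIONS = {
        "hallucination": "regenerate_test",      # test invented behaviour -> rewrite it
        "source_bug": "flag_source_bug",         # keep the test, report the bug
        "mock_issue": "fix_mocks",
        "test_design_issue": "redesign_test",    # too brittle / over-mocked
    }

    def route(verdict_json: str, threshold: float = 0.5) -> str:
        verdict = json.loads(verdict_json)       # e.g. {"category": "source_bug", "confidence": 0.9}
        if verdict["confidence"] < threshold:
            return "regenerate_test"             # low confidence: safest to start over
        return ACTIONS[verdict["category"]]

    print(route('{"category": "source_bug", "confidence": 0.92}'))  # -> flag_source_bug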

While this is not 100% bulletproof, I found it to be quite an effective approach - basically using the LLM for categorization.

Hope that answers your question!

bulba4aur•1d ago
To clarify, each failing test triggers a "review" agent to determine "why" the test fails. This could probably be improved with better heuristics and more in-depth static analysis of the source code, but that is how it works in the current version.
arthurstarlake•20h ago
I wonder if always having a design doc of some substance discussing the intended behavior of the whole app would help reduce instances of hallucination. The human developer should create it and let the AI access it.
bulba4aur•20h ago
100% agree with that
hrimfaxi•1d ago
How exactly do credits work? Your pricing mentions files and functions but doesn't appear to give a true unit of measure.
bulba4aur•1d ago
Hey, thanks for the feedback, I will make sure to make it more visible/less confusing. The model is actually quite simple.

1 credit - 1 file with up to 15 functions. <-- only this tier is available in alpha, due to current limitations in the implementation. I tried generating on bigger files and it took quite a long time, so I am working on solving this before enabling support for larger files.

2 credits - 1 file with up to 30 functions.

3 credits - 1 file with 30-35 functions.

P.S. If the generated tests have a <70% pass rate (at which point something probably went horribly wrong), your credits are refunded.
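
As a worked example (an illustrative helper, not product code), the mapping above is:

    # Credit cost per file, per the tiers above; behaviour beyond 35 functions
    # is my assumption (not yet supported), so it just raises.
    def credits_for_file(function_count: int) -> int:
        if function_count <= 15:
            return 1
        if function_count <= 30:
            return 2
        if function_count <= 35:
            return 3
        raise ValueError("files with more than 35 functions aren't supported yet")

    assert credits_for_file(12) == 1
    assert credits_for_file(22) == 2
    assert credits_for_file(33) == 3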

Hope this answer clears things up!

joshuaisaact•1d ago
I notice one of the things you don't really talk about in the blog post (or if you did, I missed it) is unnecessary tests, which is one of the key problems LLMs have with test writing.

In my experience, if you just ask an LLM to write tests, it'll write you a ton of boilerplate happy-path tests that aren't wrong per se, just pointless (one fun one in React is "the component renders").
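
A pytest analogue of the same problem (toy example):

    # The test can only fail if the import or the constructor breaks - it verifies
    # no behaviour at all, the pytest equivalent of "the component renders".
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        total: float = 0.0

    def test_invoice_exists():
        assert Invoice() is not None   # always true; adds noise, not coverage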

How do you plan to handle this?

bulba4aur•1d ago
I've actually thought about this multiple times at this point.

You're right, this deserves more attention and is a real problem going forward with this app. I had this problem when I just started building: it either generated XSS tests for any user input validation method (even if it used other validators) or just a single test case.

For now I strictly limit the number of tests the LLM is allowed to generate.

This is achieved with a "Planner" that plans the tests for each function before any generation happens. That agent is instructed to produce a plan that follows these criteria:

- testCases.category MUST be one of "happy_path" | "edge_case" | "error_handling" | "boundary".

And it is asked to generate 2-3 tests for each category. While this may still produce some unnecessary tests, it at least limits how many there are.
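
A rough sketch of what that constrained plan could look like (field names are my assumption, not the real schema):

    # Illustrative plan schema with a per-category budget; not the extension's real types.
    from dataclasses import dataclass
    from typing import Literal

    Category = Literal["happy_path", "edge_case", "error_handling", "boundary"]

    @dataclass
    class PlannedCase:
        function_name: str
        category: Category
        description: str

    def enforce_budget(cases: list[PlannedCase], max_per_category: int = 3) -> None:
        # Reject plans that pad a category beyond the 2-3 case budget.
        counts: dict[Category, int] = {}
        for case in cases:
            counts[case.category] = counts.get(case.category, 0) + 1
            if counts[case.category] > max_per_category:
                raise ValueError(f"too many {case.category} cases for {case.function_name}")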

Going forward I believe the best approach would be to tune and tweak the requirements based on the language/framework it detects.

observationist•1d ago
Do a structured code review, with a few passes by Claude or Codex. Have it provide an annotated justification for each test, and flag tests with redundant, low, or no utility within the context of the rest of the tests. Anything that looks questionable to you, call it out on the next pass, and if it's not justified by the time you fully understand the tests, nuke it.

You could automate this, but you'll end up getting rid of useful tests and keeping weird useless ones until the AI gets better at nuance and large codebases.

OptionOfT•21h ago
What I see a lot is a generated test for something I prompt, and the test passes. Then I manually break the test and it fails for a different reason, not what I wanted to verify.

Guess I need to make it generate negative tests?

aleksiy123•18h ago
The automated version of this is mutation testing.

Which is actually probably a solid idea for this exact use case.
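
A toy illustration of the idea (real tools like mutmut automate the mutate-and-rerun loop):

    # Flip one operator in the source (the "mutant") and check the tests notice.
    def clamp(x: int, lo: int, hi: int) -> int:
        return max(lo, min(x, hi))

    def clamp_mutant(x: int, lo: int, hi: int) -> int:
        return max(lo, max(x, hi))   # mutation: min -> max

    def suite_passes(fn) -> bool:
        return fn(5, 0, 10) == 5 and fn(42, 0, 10) == 10

    assert suite_passes(clamp)             # original code passes the tests
    assert not suite_passes(clamp_mutant)  # a useful test "kills" the mutant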

rcarmo•22h ago
Weird. Copilot knows what tests are and only "fixes" them after we've refactored the relevant code.

I really wonder if Claude Code and other agents keep track of these dependencies at all (I know that VS Code exposes its internal testing tools to agents, and I use Anthropic and OpenAI tools with them).

bulba4aur•5h ago
Indeed, the Microsoft Copilot ecosystem might be a bit more sophisticated these days.

It just so happens that people around me, including myself, don't use Copilot; we "left" for the next big thing when Cursor was released and Copilot was still a glorified auto-complete.

From your feedback it sounds like it has become quite good?