frontpage.

Kagi Translate

https://translate.kagi.com
1•microflash•31s ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•1m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
1•facundo_olano•3m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•3m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•3m ago•0 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
2•tartoran•4m ago•0 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•4m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•5m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
1•maxmoq•6m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•6m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•6m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•7m ago•1 comments

Ask HN: What word games do you play every day?

1•gogo61•10m ago•1 comments

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•11m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•15m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•16m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•16m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•16m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•17m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•18m ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
1•mellosouls•18m ago•1 comments

The Neuroscience Behind Nutrition for Developers and Founders

https://comuniq.xyz/post?t=797
1•01-_-•18m ago•0 comments

Bang bang he murdered math {the musical } (2024)

https://taylor.town/bang-bang
1•surprisetalk•18m ago•0 comments

A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•21m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
2•geox•22m ago•1 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•23m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
2•fainir•26m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•26m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•29m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•33m ago•0 comments

Show HN: Duende: Web UX for guiding Gemini as it improves your source code

https://github.com/alefore/duende
8•afc•6mo ago
I wrote a simple web UX in Python/JavaScript that spawns a conversation with Google Gemini, giving it MCP commands so it can work on a specific coding task that you specify: http://github.com/alefore/duende
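
To picture the mechanics, here is a minimal sketch of the core idea: spawning a Gemini chat that can call local tools to read and edit files. It uses the google-generativeai Python SDK; the tool names (`read_file`, `write_file`), the model name, and the task prompt are illustrative assumptions, not Duende's actual code.

    # Hypothetical sketch (not Duende's actual code): a Gemini chat that can
    # call local file tools while working on a coding task.
    import google.generativeai as genai

    def read_file(path: str) -> str:
        """Return the contents of a source file (tool exposed to the model)."""
        with open(path) as f:
            return f.read()

    def write_file(path: str, contents: str) -> str:
        """Overwrite a source file with new contents (tool exposed to the model)."""
        with open(path, "w") as f:
            f.write(contents)
        return f"wrote {len(contents)} bytes to {path}"

    genai.configure(api_key="...")  # your Gemini API key
    model = genai.GenerativeModel(
        "gemini-1.5-pro",                # example model name
        tools=[read_file, write_file],   # the SDK exposes these as callable tools
    )
    chat = model.start_chat(enable_automatic_function_calling=True)
    response = chat.send_message(
        "Task: add a --verbose flag to cli.py and document it in README.md."
    )
    print(response.text)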

The UX lets you observe the conversation and provide guidance (e.g. "don't implement Foo through Bar, that's suboptimal; instead …").

It supports a `--review` mode, where once the main conversation says "I'm done with the task", various "evaluation" conversations are spawned, each focusing on reviewing the change from a very specific angle (e.g. "does it introduce useless comments?"). In the future, I'm considering adding other workflows.
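
As a rough illustration of the `--review` idea (again, not Duende's actual implementation), each evaluator could be a fresh conversation seeded with a single narrow question about the final change; the prompts and helper names below are made up for the example.

    # Illustrative sketch: one clean-context conversation per review angle.
    EVALUATORS = [
        "Does the change introduce useless or redundant comments?",
        "Does the change break any existing public interface?",
        "Are there new code paths without test coverage?",
    ]

    def run_review(model, diff: str) -> list[str]:
        findings = []
        for angle in EVALUATORS:
            chat = model.start_chat()  # each evaluator gets its own fresh context
            reply = chat.send_message(
                f"Review the following change from one angle only: {angle}\n\n{diff}"
            )
            findings.append(reply.text)
        return findings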

I've used it mostly to develop itself (I started with a very rudimentary manual implementation, then largely used Duende to extend its own implementation), as well as (with moderate success) to add a few new features to [my C++ text editor](http://github.com/alefore/edge).

It's been a lot of fun. I'm still developing my intuitions for what works and what doesn't, but I've already had plenty of experiences where I've been wowed by what LLMs can already accomplish (as well as, to be honest, plenty of very disappointing cases where they struggle significantly on tasks I expected would be trivial). It's a learning experience: figuring out how to avoid hallucinations, developing an intuition for how much to break large tasks into smaller ones, and knowing when to abort a conversation and restart it with an improved prompt (vs. continuing to steer it in the right direction).

I think investing up-front in setting up a good context (e.g., good validation logic, useful constant contexts, a good set of "review evaluators" that can push back against sloppy code) can go a long way toward increasing the odds of success.
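
One way to picture that up-front investment, purely as a sketch with made-up field names (not Duende's configuration format), is a small per-task bundle of the constant context files, a validation command, and the review evaluators:

    # Illustrative only: bundling the up-front "context investment".
    from dataclasses import dataclass, field

    @dataclass
    class TaskContext:
        constant_files: list[str] = field(
            default_factory=lambda: ["README.md", "docs/style.md"])  # always in the prompt
        validation_command: str = "pytest -q"                        # run after every edit
        evaluators: list[str] = field(
            default_factory=lambda: [
                "Does the change introduce useless comments?",
                "Does the change leave dead code behind?",
            ])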

It has changed my perception of the applicability of AI to developing software. While I've reviewed all the outputs (and often rewrite parts of them manually), I'm already incorporating it into (some parts of) my development life-cycle.

I have many ideas for improvements, but figured I'd share this early and ask for feedback. Hopefully others find it interesting; would love to hear your thoughts.