A good AGENTS.md is a model upgrade. A bad one is worse than no docs at all

https://www.augmentcode.com/blog/how-to-write-good-agents-dot-md-files
57•gmays•2h ago

Comments

rgbrgb•1h ago
I'd guess the same has always been true for READMEs / human dev docs. Of course it doesn't transfer directly, but it still feels incredible to be in an age where we can measure such (previously) theoretical things with synthetic programmers.
forgotusername6•1h ago
Interesting that they had a 100% read rate of AGENTS.md. In my test repo, AGENTS.md files lower down in the tree were occasionally missed by VS Code Copilot. That put me off putting too much effort into nesting AGENTS.md files within the repo, and I've been focusing on agent skills instead.
weiliddat•33m ago
This is more of a harness thing, signaling the presence of (or forcing a read of) AGENTS.md/CLAUDE.md, right?
stavros•2m ago
Yes, it is. The main feature that differentiates AGENTS.md from other files is that it is usually loaded into the context automatically.
lelandbatey•9m ago
The 100% read rate is very harness/CLI dependent. The "original" idea for AGENTS.md was that the file would be included as-is in the system prompt by the harness, so the agent doesn't have any choice in whether it's read or not. For example, this is a shortened form of what opencode sends as a system prompt for a new session when interacting with a provider (displayed as YAML, and edited for brevity):

    model: foo-model
    max_tokens: 32000
    top_p: 1
    messages:
      - role: system
        content: |
          You are opencode, an interactive CLI tool that helps users with software engineering tasks.
          Use the instructions below and the tools available to you
          # ... snip ...
          Here is some useful information about the environment you are running in:
          <env>
            Working directory: /home/user/dir
            Workspace root folder: /
            Is directory a git repo: no
            Platform: linux
            Today's date: Tue Apr 28 2026
          </env>
          Skills provide specialized instructions and workflows for specific tasks.
          Use the skill tool to load a skill when a task matches its description.
          No skills are currently available.
          Instructions from: /home/user/dir/AGENTS.md
          # Overview
          This directory holds the entirety of the code for the <dayjob> company. All code lives in Github
          under the `<dayjob>` organization, and beneath that Organization is a wide-and-flat set of all
          the Git repositories of all source code at <dayjob>. That Github repo structure is replicated in
          this directory via `ghorg`.
My AGENTS.md file contents start at the "# Overview" line.

Notice that the harness is just unceremoniously dumping the AGENTS.md file into the exact same text stream as the system prompt, barely contextualizing that hey, starting now, this text is from AGENTS.md and not from the harness.

If you want AGENTS.md to work (likewise, if you want skills or anything else to work), you have to know how the harness is handling and feeding them to the LLM, because no LLM will reliably go look for them on its own.
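A minimal sketch of this injection pattern, assuming a hypothetical harness (`build_system_prompt` and `BASE_PROMPT` are invented names, not opencode's actual code):

```python
from pathlib import Path

# Hypothetical base prompt; a real harness ships a much longer one.
BASE_PROMPT = "You are an interactive CLI tool that helps users with software engineering tasks."

def build_system_prompt(workdir: str) -> str:
    """Build a system prompt the way many harnesses do: dump AGENTS.md
    into the same text stream, marked only by a one-line header."""
    parts = [BASE_PROMPT]
    agents = Path(workdir) / "AGENTS.md"
    if agents.is_file():
        parts.append(f"Instructions from: {agents}")
        parts.append(agents.read_text())
    return "\n".join(parts)
```

Because the file rides along unconditionally in the system message, the model never gets a chance to skip it; harnesses that instead expose AGENTS.md through a read tool are the ones where coverage drops below 100%.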

verdverm•1h ago
IME, multiple (good) AGENTS.md files are even better. I mostly see them only at the root of a repository, but I spread more out into important subdirectories. They act as a table of contents and SparkNotes. Putting more focused AGENTS.md files in important places has been even more helpful.

Bonus points if you can force them into context without needing the agent to make a tool call, based on touching the files or systems near them. (my homegrown agent has this feature)
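One way to sketch that "force nearby AGENTS.md into context" idea (a hypothetical helper; `collect_agents_files` is an invented name, not part of any real harness):

```python
from pathlib import Path

def collect_agents_files(repo_root: str, touched_file: str) -> list[Path]:
    """Given a file the agent is touching, gather every AGENTS.md on the
    path from the repo root down to that file's directory, root first and
    most specific last, so nested files can refine the general ones."""
    current = Path(repo_root).resolve()
    found = []
    if (current / "AGENTS.md").is_file():
        found.append(current / "AGENTS.md")
    for part in Path(touched_file).parent.parts:
        current = current / part
        if (current / "AGENTS.md").is_file():
            found.append(current / "AGENTS.md")
    return found
```

A harness with this feature can concatenate the results into the prompt whenever a tool call touches files under those directories, with no tool call from the agent required.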

themafia•56m ago
The models are so terrible you have to think ahead of them so they don't make mistakes. This is not an upgrade. This is coping behavior.
readitalready•21m ago
That's like saying "the programmers are so terrible you have to think ahead of them so they don't make mistakes".
Rekindle8090•17m ago
No, it's not actually anything like that whatsoever. Programmers are objectively, infinitely more capable than LLMs. Stop anthropomorphizing algorithms.
avereveard•16m ago
Eh, good programmers are goal-oriented; today's SOTA models still mostly need step-by-step guidance, so there's still a gap.

The AGENTS.md pieces that pin specific tool-call shapes or force chain-of-thought before action are coping that ages out, on the same lifecycle as the retry-with-different-prompt loops and chain-of-thought prompts most stacks shipped in 2024 to compensate for brittle instruction-following.

Not quite there yet, but it's nice to see these files get shorter and shorter with each model release, until the basics are peeled away by the march of progress and one day only the invariants are left.

httpdemon•14m ago
This is like saying programmers are so terrible that you have to think ahead of them and document your code/project so devs don't make mistakes, and that anyone who thinks README files are a good thing is coping.
weiliddat•25m ago
I suspect the harness (of which AGENTS.md, skills, and similar things are a part) should be abstracted away for better overall performance. This article doesn't really go into detail about model preferences, but some other benchmarks show that different models have different preferences for how to use certain tools (probably related to their post-training material), and this should really be managed invisibly for me as the end user.

Also curious how well LLMs can self-reflect in a loop, in terms of: here's how the previous iteration went, here's what didn't go well, here's feedback from the human, so how do I modify the docs I use in a way that makes me do better next time?

I know you can somewhat hill-climb via DSPy, but that's hard to generalize.
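The reflection loop described above can be sketched in a few lines; everything here is hypothetical (`reflect_and_update` is an invented name, `llm` stands in for any provider's completion function, and the prompt wording is made up):

```python
def reflect_and_update(agents_doc: str, transcript: str, feedback: str, llm) -> str:
    """One iteration of a self-reflection loop: ask the model to rewrite
    its own working docs given the last run's transcript and human feedback.
    `llm` is any prompt -> text completion function."""
    prompt = (
        "Here is the current AGENTS.md:\n" + agents_doc
        + "\n\nHere is how the previous iteration went:\n" + transcript
        + "\n\nFeedback from the human:\n" + feedback
        + "\n\nRewrite AGENTS.md so the next iteration avoids these mistakes. "
        "Return only the revised file."
    )
    return llm(prompt)
```

Whether the rewritten doc actually improves the next run is the hard part; without a held-out task to hill-climb against (which is what DSPy's optimizers provide), a loop like this can just as easily accumulate noise.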

chickensong•9m ago
Claude self-reflects and updates based on feedback pretty well these days, but it seems to lean on memory more than on updating CLAUDE.md. I don't know how well it adheres to memory, but it seems to work sometimes. I don't like that the memory is stored outside the project directory, though.
weiliddat•5m ago
Hmm, I would hope that's for better quality (if there are somehow model-specific optimizations) or for search/retrieval methods down the line, but I can't help feeling that the labs/providers might try to lock in customers by making things non-portable/opaque.
chickensong•1m ago
Oh yeah, it definitely feels like a scramble to add lock-in features.
chickensong•16m ago
It's cool that they did some measurements, but unfortunately there's not much to learn from the article unless you're using really outdated files that you wrote by hand. The agent should know how to write a good file.

For existing files, the agent will carry on a bad structure unless you specifically ask it to refactor and think about what's actually helpful.

In general, it should be a lean file that tells the agent how to work with the project (short description, table of commands, index of key docs, supporting infra, and a handful of high-level rules and conventions that apply to everything). Occasionally ask the agent to review and optimize the file, particularly after model upgrades.
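As a concrete example of that lean shape, a hypothetical AGENTS.md (all project names, paths, and commands invented for illustration) might look like:

```markdown
# acme-api

Go monorepo for the Acme REST API. Code lives in `cmd/` and `internal/`.

## Commands

| Task  | Command      |
|-------|--------------|
| Build | `make build` |
| Test  | `make test`  |
| Lint  | `make lint`  |

## Key docs

- `docs/architecture.md` - service boundaries
- `docs/deploy.md` - CI/CD and environments

## Rules

- Run `make lint` before proposing a diff.
- Never edit generated files under `internal/gen/`.
```

Everything the agent needs to orient itself, nothing it can infer from the code on its own.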

acgourley•2m ago
Every time I've asked a model to write its AGENTS/CLAUDE file, it's been pretty bad, actually. Are you sure writing these files is in distribution right now?