
The <output> Tag

https://denodell.com/blog/html-best-kept-secret-output-tag
666•todsacerdoti•13h ago•152 comments

Microsoft only lets you opt out of AI photo scanning 3x a year

https://hardware.slashdot.org/story/25/10/11/0238213/microsofts-onedrive-begins-testing-face-reco...
162•dmitrygr•2h ago•53 comments

Testing two 18 TB white label SATA hard drives from datablocks.dev

https://ounapuu.ee/posts/2025/10/06/datablocks-white-label-drives/
101•thomasjb•5d ago•57 comments

How Apple designs a virtual knob (2012)

https://jherrm.github.io/knobs/
68•gregsadetsky•4d ago•43 comments

GNU Health

https://www.gnuhealth.org/about-us.html
281•smartmic•5h ago•78 comments

Vibing a non-trivial Ghostty feature

https://mitchellh.com/writing/non-trivial-vibing
167•skevy•7h ago•88 comments

Superpowers: How I'm using coding agents in October 2025

https://blog.fsck.com/2025/10/09/superpowers/
220•Ch00k•14h ago•138 comments

AMD and Sony's PS6 chipset aims to rethink the current graphics pipeline

https://arstechnica.com/gaming/2025/10/amd-and-sony-tease-new-chip-architecture-ahead-of-playstat...
267•zdw•16h ago•326 comments

The World Trade Center under construction through photos, 1966-1979

https://rarehistoricalphotos.com/twin-towers-construction-photographs/
158•kinderjaje•4d ago•78 comments

Windows Subsystem for FreeBSD

https://github.com/BalajeS/WSL-For-FreeBSD
184•rguiscard•14h ago•64 comments

Microsoft Amplifier

https://github.com/microsoft/amplifier
178•JDEW•6h ago•113 comments

Building a JavaScript Runtime from Scratch using C

https://devlogs.xyz/blog/building-a-javaScript-runtime
52•redbell•3d ago•19 comments

People regret buying Amazon smart displays after being bombarded with ads

https://arstechnica.com/gadgets/2025/10/people-regret-buying-amazon-smart-displays-after-being-bo...
65•croes•3h ago•26 comments

A quiet change to RSA

https://www.johndcook.com/blog/2025/10/06/a-quiet-change-to-rsa/
74•ibobev•5d ago•25 comments

Indonesia says 22 plants in industrial zone contaminated by caesium 137

https://www.reuters.com/sustainability/boards-policy-regulation/indonesia-says-22-plants-industri...
18•geox•1h ago•7 comments

All-New Next Gen of UniFi Storage

https://blog.ui.com/article/all-new-next-gen-of-unifi-storage
20•ycombinete•3d ago•13 comments

I built physical album cards with NFC tags to teach my son music discovery

https://fulghum.io/album-cards
532•jordanf•1d ago•183 comments

Wilson's Algorithm

https://cruzgodar.com/applets/wilsons-algorithm/
30•FromTheArchives•8h ago•6 comments

Rating 26 years of Java changes

https://neilmadden.blog/2025/09/12/rating-26-years-of-java-changes/
68•PaulHoule•3h ago•61 comments

How to check for overlapping intervals

https://zayenz.se/blog/post/how-to-check-for-overlapping-intervals/
70•birdculture•6h ago•20 comments

Crypto-Current (2021)

https://zerophilosophy.substack.com/p/crypto-current
11•keepamovin•5d ago•3 comments

(Re)Introducing the Pebble Appstore

https://ericmigi.com/blog/re-introducing-the-pebble-appstore/
249•duck•23h ago•47 comments

Otary now includes 17 image binarization methods

https://alexandrepoupeau.com/otary/api/image/transformers/thresholding/
4•poupeaua•4d ago•2 comments

How hard do you have to hit a chicken to cook it? (2020)

https://james-simon.github.io/blog/chicken-cooking/
163•jxmorris12•19h ago•95 comments

A Library for Fish Sounds

https://nautil.us/a-library-for-fish-sounds-1239697/
30•pistolpete5•4d ago•4 comments

Diane Keaton, a Star of 'Annie Hall' and 'First Wives Club,' Dies at 79

https://www.nytimes.com/2025/10/11/movies/diane-keaton-dead.html
6•mhb•45m ago•0 comments

Discord hack shows risks of online age checks

https://news.sky.com/story/discord-hack-shows-dangers-of-online-age-checks-as-internet-policing-h...
130•ColinWright•2h ago•41 comments

Tangled, a Git collaboration platform built on atproto

https://blog.tangled.org/intro
295•mjbellantoni•1d ago•81 comments

Daniel Kahneman opted for assisted suicide in Switzerland

https://www.bluewin.ch/en/entertainment/nobel-prize-winner-opts-for-suicide-in-switzerland-261946...
468•kvam•13h ago•434 comments

Programming in the Sun: A Year with the Daylight Computer

https://wickstrom.tech/2025-10-10-programming-in-the-sun-a-year-with-the-daylight-computer.html
161•ghuntley•21h ago•52 comments

Microsoft Amplifier

https://github.com/microsoft/amplifier
177•JDEW•6h ago

Comments

ukFxqnLa2sBSBf6•4h ago
[flagged]
dang•3h ago
Please don't post like this to this site. It's against the guidelines (https://news.ycombinator.com/newsguidelines.html) because we're trying for something else here.
ukFxqnLa2sBSBf6•58m ago
My apologies, won’t happen again
hansmayer•4h ago
>"Amplifier is a complete development environment that takes AI coding assistants and supercharges them with discovered patterns, specialized expertise, and powerful automation — turning a helpful assistant into a force multiplier that can deliver complex solutions with minimal hand-holding."

Again this "supercharging" nonsense? Maybe in Satya's confabulated AI-powered universe, but not in the real world, I'm afraid...

zb3•4h ago
README files in the "ai_context" directory provide the ultimate AI slop reading experience.
qsort•4h ago
Yeah, I'm not even that opposed to using AI for documentation if it helps, but everything from Microsoft recently has been full-on slop. It's almost like they're trying to make sure you can't miss it's AI generated.
rectang•4h ago
"Eat your own dog slop" isn't bad practice, though.

Some people in the organization will experience the limitations and some will learn — although there are bound to be people elsewhere in the organization who have a vested interest in not learning anything and pushing the product regardless.

npalli•4h ago
Contributors

claude Claude

Interesting given Microsoft’s history with OpenAI

wiether•3h ago
History in AI is rewritten on a daily basis

https://techcrunch.com/2025/09/09/microsoft-to-lessen-relian...

mark212•1h ago
more than history -- early, massive investment in OpenAI by Microsoft and formerly their exclusive compute provider.

This stood out to me too, seems like a months-long project with heavy use of Claude

neuroelectron•4h ago
aka Winamp
bgwalter•4h ago
Can we get Windows 7 back instead? Nadella rode the cloud wave in an easy upmarket, his "AI" obsession will fail. No one wants this.

The Austrian army already switched to LibreOffice for security reasons, we don't need another spyware and code stealing tool.

SilverElfin•4h ago
> Nadella rode the cloud wave in an easy upmarket

I would say it’s more the result of anti competitive bundling of cloud things into existing enterprise contracts rather than the wave. Microsoft is far worse than it ever was in the 90s but there’s no semblance of antitrust action in America.

falcor84•4h ago
> No one wants this

There are many many people who want better AI coding tools, myself included. It might or might not fail, but there is a clear and strong opportunity here, that it would be foolish of any large tech company to not pursue.

bgwalter•3h ago
Their own employees have to be surveilled and coerced to use their own dog food:

https://news.ycombinator.com/item?id=45540174

jug•4h ago
I'll always be skeptical about using AI to amplify AI. I think humans are needed to amplify AI since humans are so far documented to be significantly more creative and proactive in pushing the frontier than AI. I know, it's maybe a radical concept to digest.
dr_dshiv•4h ago
Based on clear, operational definitions, AI is definitely more creative than humans. E.g., can easily produce higher scores on a Torrance test of divergent thinking. Humans may still be more innovative (defined as creativity adopted into larger systems), though that may be changing.
qlm•4h ago
This is absurd to the point of being comical. Do you really believe that?

If an “objective” test purports to show that AI is more creative than humans then I’m sorry but the test is deeply flawed. I don’t even need to look at the methodology to confidently state that.

hansmayer•4h ago
More creative? I've just seen my premium subscription "AI" struggling to find a trivial missing import in a very small toy project. Maybe these tools are getting all sorts of scores on all sorts of benchmarks, I don't doubt it, but why are there no significant real-world results after more than 3 years of hype?

It reminds me of that situation when the geniuses at Google offered a job to the guy who created Homebrew and then rejected him after he supposedly did not do well on one of those algorithmic tasks (inverting a binary tree? not sure if I remember correctly). There are also all sorts of people scoring super high on various IQ tests, but what counts, with humans as with the supposed AI, is real-world results. Benchmarks without results do not mean anything.
vachina•3h ago
It is as creative as it's training material.

You think it is creative because you lack the knowledge of what it has learnt.

jsheard•4h ago
> I'll always be skeptical about using AI to amplify AI.

This project was in part written by Claude, so for better or worse I think we're at least 3 levels deep here (AI-written code which directs an AI to direct other AIs to write code).

Balinares•2h ago
I think I'm more optimistic about this than brute-forcing model training with ever larger datasets, myself. Here's why.

Most models I've benchmarked, even the expensive proprietary models, tend to lose coherence when the context grows beyond a certain size. The thing is, they typically do not need the entire context to perform whatever step of the process is currently going on.

And there appears to be a lot of experimentation going on along the line of having subagents in charge of curating the long term view of the context to feed more focused work items to other subagents, and I find that genuinely intriguing.

My hope is that this approach will eventually become refined enough that we'll get dependable capability out of cheap open weight models. That might come in darn handy, depending on the blast radius of the bubble burst.

tcdent•4h ago
> Never lose context again. Amplifier automatically exports your entire conversation before compaction, preserving all the details that would otherwise be lost. When Claude Code compacts your conversation to stay within token limits, you can instantly restore the full history.

If this is restoring the entire context (and looking at the source code, it seems like it is just reloading the entire context) how does this not result in an infinite compaction loop?
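tcdent's question can be made concrete with a sketch. Amplifier's real hook and data shapes aren't shown in this thread; `export_before_compaction` below is a hypothetical stand-in for what such an export step might look like, and the docstring notes why a naive full restore would just trigger compaction again:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def export_before_compaction(messages, export_dir="context_exports"):
    """Dump the full message list to a timestamped JSON file.

    Note the loop risk tcdent raises: if restoring means reloading the
    *entire* export, the context is immediately over the token limit
    again and compaction re-fires. A practical restore has to be
    selective (e.g. reload only messages matching the current task).
    """
    out = Path(export_dir)
    out.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = out / f"conversation-{stamp}.json"
    path.write_text(json.dumps(messages, indent=2))
    return path
```

The export side is trivially safe; all the interesting design questions live on the restore side.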

chews•4h ago
Billions in investment into OpenAI, and this is a wrapper for Claude API usage. This is very much a Microsoft product.
rco8786•4h ago
A lot of snark in these comments. Has anyone actually tried it yet?
SilverElfin•4h ago
I’ve seen people discuss these types of approaches on X. To me it looks like the concepts here are already tried and popular - they’re just packaging it up so that people who aren’t as deep in that world can get the same benefits. But I’m not an expert.
ridruejo•4h ago
Exactly. I don’t understand the cynicism in the comments and they literally are just trying to make the technology more accessible
nozzlegear•4h ago
That's a very altruistic outlook on Microsoft's intent with getting everyone to use and depend on AI.
vachina•3h ago
Microsoft is on a roll at repackaging open source efforts, branding them, and then saying they made it.
otterley•2h ago
Isn’t that what every company that sells technology does—build demos and showcase uses in order to provoke the imagination and motivate sales? No company is perfect, but what Microsoft is doing here is hardly unusual.
ramraj07•4h ago
I have two hypotheses:

1. It affects the fundamental ego of these engineers that a computer can do what they thought only they could do and what they thought made them better than the rest of the population. They might not realize this of course.

2. AI and all these AI systems are intelligence multipliers, with a zero around IQ 100. Anything multiplied by zero is zero, and a negative multiplier just leads to garbage. So the people who say "I used AI and it's garbage" should really think hard about what that says about them. I thought I was crazy to entertain this hypothesis, but someone else mentioned the exact same idea, so I no longer think I was just being especially mean.

Angostura•4h ago
You seem to be assuming that the negative multiplier is on the human side of the equation. There’s your mistake
milutinovici•3h ago
Alternative hypothesis is that you work on trivial problems, and therefore you get a lot of help from LLMs. Have you considered this?
ramraj07•3h ago
I'm definitely not creating the next Stuxnet, for sure. So I'll bow down to whoever is writing the next C compiler, I suppose.
hansmayer•3h ago
Nothing to do with ego, but you may want to check your own projections; you know how, when you speak of others, you mainly speak of yourself (Jung or Freud, not sure). No need to be bitter about not having the grind and focus to become an engineer yourself; it is after all much harder than, say, earning an MBA, and you should be OK with whatever you turned out to be. Not to mention that the tools themselves were in fact built by engineers, not by the "rest of the population" like yourself.

Now having said that, I am an early adopter myself, and was happy to pay the premium costs for my entire company if the tool was any kind of amplifier. But the crap just does not work. Recently the quality has degraded so much that we reduced it to simple consultation, and we only do that because search has been ruined. Otherwise most of the folks I know using these tools, both internally and externally, would be happy to go back to Google search and SO. Unfortunately that's not an option.

Also, see if your second argument makes any sense at all. Maybe it comes from a lacking math background? Firstly, you don't need two zeroes to get a zero out of a multiplication. And secondly, if an average engineer is a zero, what are folks like you? But again, maybe that's just your own projections...
ramraj07•3h ago
For some reason you're assuming I'm not an engineer, which is funny and revealing.

I am an engineer and my vibe coded prototype is now in production, one of the best applications of its type in the industry, and doing really well. So well, I have a pretty large team working on it now. This project was and still is 95% written by AI. No complaints, never going back. That's my experience.

Clearly the eng community is splitting into two categories, people who think this is all never going to work and people who think otherwise. Time will tell who's right.

To anyone else reading and thinking closer to the second side, we're hiring :)

bgwalter•3h ago
Which company so we can avoid it?
hansmayer•3h ago
Hey no need to prove yourself to a stranger on the Internet. I'll take your word for it, including your "pretty large team working on it", which for some reason is necessary, although you have "vibe coded 95%" of your application. So if you were to be taken by your word, the LLMs are fantastic and you can do 95% production-ready on your own, just using the LLMs, but for some reason, you'll still need a "pretty large team" to work on it afterwards. Yeah, that sounds very consistent with your main line. Also feel free to share your company and product name, so we can avoid it - thanks.
LtWorf•2h ago
A PhD in medical engineering doesn't scream "computer science expert" to me.
Keyframe•1h ago
trust me bro vibes
bgwalter•3h ago
This is like a person who thinks that making a photocopy of an Einstein paper makes him Einstein. You know, Einstein wasn't that special after all and the photocopier affects his fundamental ego.
hansmayer•4h ago
I think most of us are irritated by the constant A/B testing and underwhelming releases. Let's just have the bubble pop so we can solve real problems instead of this.
fishmicrowaver•3h ago
Hehe suddenly many people will have the real problem of paying bills unfortunately
hansmayer•3h ago
They will, especially when it comes to paying back the VCs all the burnt GenAI-dollars
bee_rider•59m ago
I thought VCs were investors.

Generally when your investment fails you don’t get paid back, right?

rs186•2h ago
The repo is full of big AI words without any metrics/benchmark.

People are correct to question it.

If anything, Microsoft needs to show something meaningful to make people believe it's worth trying it out.

rs186•4h ago
> This project is a research demonstrator.

I went through the whole README but couldn't find any hint about this actually making existing agents more effective/accurate or make fewer mistakes.

The whole thing, at the very least, throws the simplicity of existing tools out the window and makes everything 20x more complicated.

I seriously doubt if Microsoft has any objective metrics (benchmarks) -- even cherry-picked ones -- to show this project is not a complete waste of time.

btw the whole repo looks like it's AI generated slop.

[To cowards who downvoted me without leaving a comment: show me some numbers and prove me wrong.]

alganet•4h ago
> "I have more ideas than time to try them out" — The problem we're solving

I see a possible paradox here.

For exploration, my goal is _to learn_. Trying out multiple things is not wasting time, it's an intensive learning experience. It's not about finding what works fast, but understanding why the thing that works best works best. I want to go through it. Maybe that's just me though, and most people just want to get it done quickly.

vincnetas•4h ago
Starting in Claude bypass mode does not give me confidence:

WARNING: Claude Code running in Bypass Permissions mode

In Bypass Permissions mode, Claude Code will not ask for your approval before running potentially dangerous commands. This mode should only be used in a sandboxed container/VM that has restricted internet access and can easily be restored if damaged.

nine_k•3h ago
The Readme clearly states:

Caution

This project is a research demonstrator. It is in early development and may change significantly. Using permissive AI tools in your repository requires careful attention to security considerations and careful human supervision, and even then things can still go wrong. Use it with caution, and at your own risk.

vincnetas•2h ago
Claude Code will not ask for your approval before running potentially dangerous commands.

and

requires careful attention to security considerations and careful human supervision

are a bit orthogonal, no?

otterley•2h ago
It’s not orthogonal at all. On the contrary, it’s directly related:

“Using permissive AI tools [that is, ones that do not ask for your approval] in your repository requires careful attention to security considerations and careful human supervision”. Supervision isn’t necessarily approving every action: it might be as simple as inspecting the work after it’s done. And security considerations might mean to perform the work in a sandbox where it can’t impact anything of value.

nine_k•1h ago
As a token of careful attention, run this in a clean VM, properly firewalled not to access the host, your internal network, GitHub or wherever your valuable code lives, and ideally anything but the relevant Anthropic and Microsoft API endpoints.
thethimble•1h ago
And even then if you give it Internet access you're at risk of code exfiltration attacks.
cyral•3h ago
If they didn't have this warning you'd see comments on how irresponsible they are being
nicwolff•2h ago
I assumed, especially with the VS Code recommendation, that this would automatically use devcontainers...
koakuma-chan•4h ago
Is this a Claude Code wrapper?
skrebbel•4h ago
Yes
xorgun•4h ago
Is this going to be another HN dropbox moment?
furyofantares•4h ago
I do a lot of work with claude code and codex cli but frankly as soon as I see all the LLM-tells in the readme, and then all the commit messages written by claude, I immediately don't want to read the readme or try the project until someone else recommends it to me.

This is gaining stars and forks but I don't know if that's just because it's under the github.com/microsoft, and I don't really know how much that means.

nightshift1•3h ago
Future LLMs are going to be trained on this. Github really ought to start tagging repos that are vibe-coded.
typpilol•2h ago
I'd rather have in-depth commit messages than three-word ones
furyofantares•2h ago
When I blind-commit claude code commit messages they are sometimes totally wrong. Not even hallucinations necessarily - by the time I'm committing the context may be large and confusing, or some context lost.

I'd rather have the three word message than detailed but wrong messages.

I think I agree with you anyway on average. Most of the time a claude-authored commit message is better than a garbage message.

But it's still a red flag that the project may be filled with holes and not really ready for other people. It's just so easy to vibe your way to a project that works for you but is buggy and missing tons of features for anyone who strays from your use case.

typpilol•1h ago
You're not wrong.

I'd never encourage anyone to blind commit the messages But if they are correct they seem a lot more useful than 90% of commit messages.

The biggest mistake I've seen other people make is this: they move a file, and the commit message acts like it's a brand-new feature they added, because the LLM doesn't put together that it's just a moved file.

nightshift1•3h ago
I think that letting an LLM run unsupervised on a task is a good way to waste time and tokens. You need to catch them before they stray too far off-path. I stopped using subagents in Claude because I wasn't able to see what they were doing and intervene. Indirectly asking an LLM to prompt another LLM to work on a long, multi-step task doesn't seem like a good idea to me. I think community efforts should go toward making LLMs more deterministic with the help of good old-fashioned software tooling instead of role-playing and writing prayers to the LLM god.
hu3•2h ago
Yeah in my experience, LLMs are great but they still need babysitting lest they add 20k lines of code that could have been 2k.
danmaz74•2h ago
When the task is bigger than I trust the agent to work on it on its own, or for me to review the results, I ask it to create a plan with steps. Then create a md file for each step. I review the steps, and ask the agent to implement the first one. Review that one, fix it, then ask it to update the next steps, and then implement the next one. And so on, until finished.
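The workflow above — one reviewable markdown file per step, implemented and updated in sequence — can be sketched as plain tooling. The file naming and the `.done.md` convention here are my own illustration, not danmaz74's actual setup:

```python
from pathlib import Path

def write_plan(steps, plan_dir="plan"):
    """Write one markdown file per step so each can be reviewed and
    fed to the agent in isolation. 'steps' is a list of
    (title, body) pairs produced from the agent's proposed plan."""
    out = Path(plan_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for i, (title, body) in enumerate(steps, start=1):
        p = out / f"step-{i:02d}.md"
        p.write_text(f"# Step {i}: {title}\n\n{body}\n")
        paths.append(p)
    return paths

def next_unfinished(plan_dir="plan"):
    """Steps are marked done by renaming to .done.md; return the
    first step file still pending, or None when the plan is finished."""
    pending = [p for p in sorted(Path(plan_dir).glob("step-*.md"))
               if not p.name.endswith(".done.md")]
    return pending[0] if pending else None
```

Driving the loop is then: hand `next_unfinished()` to the agent, review the result, mark the step done, and ask the agent to revise the remaining step files before continuing.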
thethimble•1h ago
Separately, you have to consider that "wasting tokens spinning" might be acceptable if you're able to run hundreds of thousands of these things in parallel. If even a small subset of them translate to value, then you're far net ahead vs with a strictly manual/human process.
spike021•20m ago
this plus a reset in between steps usually helps focus context in my experience
anditherobot•17m ago
Have you tried Scoped context packages? Basically for each task, I create a .md file that includes relevant file paths, the purpose of the task, key dependencies, a clear plan of action, and a test strategy. It’s like a mini local design doc. I found that it helps ground implementation and stabilizes the output of the agents.
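A "scoped context package" as described could be rendered by a small helper. The section names below are illustrative, mirroring the ingredients listed in the comment (file paths, purpose, dependencies, plan, test strategy):

```python
def context_package(task, files, dependencies, plan, test_strategy):
    """Render a mini design doc for a single task: relevant paths,
    key dependencies, a plan of action, and a test strategy."""
    lines = [f"# Task: {task}", "", "## Relevant files"]
    lines += [f"- `{f}`" for f in files]
    lines += ["", "## Key dependencies"]
    lines += [f"- {d}" for d in dependencies]
    lines += ["", "## Plan"]
    lines += [f"{i}. {s}" for i, s in enumerate(plan, 1)]
    lines += ["", "## Test strategy", test_strategy]
    return "\n".join(lines)
```

The point is less the template than the discipline: the agent gets a bounded, reviewable brief instead of the whole repository as context.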
tummler•2h ago
I also use AI to do discrete, well-defined tasks so I can keep an eye on things before they go astray.

But I thought there are lots of agentic systems that loop back and ask for approval every few steps, or after every agent does its piece. Is that not the case?

lpcvoid•3h ago
[flagged]
dang•3h ago
Ok, but please don't post shallow dismissals of other people's work to HN. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
lpcvoid•1h ago
You're right, my comment was not good.
estimator7292•2h ago
The very first line in the readme is a quote, attributed to "the problem we're solving".

That's cute

nvader•33m ago
If you think about it, that's because "the problem we're solving" is running out of time. Once it's solved it won't be able to try out ideas.
theusus•2h ago
Didn’t GitHub create something similar called Spec?
janpio•1h ago
You are thinking of https://github.com/github/spec-kit
CuriouslyC•1h ago
A lot of the ideas in this aren't bad, but in general it's hacky. Context export? Just use industry standard observability! This is so bad it makes me cringe. Parallel worktrees? These are prone to putting your repo in bad states when you run a lot of agents, and you have to deal with security, just put your agent in a container and have it clone the repo. Everything this project does it's doing the wrong way.

I have a repo that shows you how to do this stuff the correct way that's very easy to adapt, along with a detailed explanation, just do yourself a favor, skip the amateur hour re-implementations and instrument/silo your agents properly: https://sibylline.dev/articles/2025-10-04-hacking-claude-cod...

stillsut•1h ago
I've actually written my own homebrew framework like this, which is (a) CLI-coder agnostic and (b) leans heavily on git worktrees [0].

The secret weapon of this approach is asking for 2-4 solutions to your prompt running in parallel. This helps avoid the most time-consuming aspect of AI coding: reviewing a large commit and ultimately finding that the approach the AI took is hopeless or requires major revision.

By generating multiple solutions, you can cut down on investing fully in the first solution, use clever ways to select from the 2-4 candidate solutions, and usually apply a small tweak at the end. Anyone else doing something like this?

[0]: https://github.com/sutt/agro
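The selection step in this multi-candidate workflow can be sketched simply. The scoring scheme below — probe tests passed, with a small penalty for larger diffs so ties go to the smaller change — is my own illustration, not agro's actual logic:

```python
def select_candidate(candidates, tests, size_weight=0.001):
    """Pick one of several parallel candidate solutions.

    Each candidate is a dict with an 'impl' callable to probe and a
    'loc' (size of its diff in lines). Score is the number of probe
    tests passed minus a small size penalty, so a correct small
    change beats a correct sprawling one.
    """
    def score(cand):
        passed = sum(1 for t in tests if t(cand["impl"]))
        return passed - size_weight * cand["loc"]
    return max(candidates, key=score)
```

In practice "tests" might be the project's real test suite run in each worktree, and the size penalty replaced by a reviewer pass; the shape of the decision stays the same.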

thethimble•1h ago
There is a related idea called "alloying" where the 2-4 candidate solutions are pursued in parallel with different models, yielding better results vs any single model. Very interesting ideas.

https://xbow.com/blog/alloy-agents

michaelbarton•1h ago
This reminds me of an approach in MCMC where you run multiple chains at different temperatures and then share results between them (replica exchange MCMC sampling), the goal being not to get stuck in one “solution”.
nopelynopington•1h ago
I was hoping this was going to be an awesome new music player, but no, every new thing is AI now. Welcome to the future.
willahmad•1h ago
Project looks interesting, but no demos. As much as I want to try it because of all the cool concepts mentioned, I'm not sure I want to invest my time if I don't see any demos.
fishmicrowaver•55m ago
I mean that's fair but doing a make install and providing your API key is pretty easy?
lordofgibbons•30m ago
There are hundreds of these on github. Why should we care? Why not release any benchmarks or examples?