
The Windows Subsystem for Linux is now open source

https://blogs.windows.com/windowsdeveloper/2025/05/19/the-windows-subsystem-for-linux-is-now-open-source/
865•pentagrama•5h ago•548 comments

Jules: An Asynchronous Coding Agent

https://jules.google/
36•travisennis•38m ago•2 comments

Zod 4

https://zod.dev/v4
494•bpierre•6h ago•171 comments

Claude Code SDK

https://docs.anthropic.com/en/docs/claude-code/sdk
166•sync•3h ago•88 comments

GitHub Copilot Coding Agent

https://github.blog/changelog/2025-05-19-github-copilot-coding-agent-in-public-preview/
220•net01•5h ago•129 comments

Launch HN: Better Auth (YC X25) – Authentication Framework for TypeScript

148•bekacru•7h ago•53 comments

Dominion Energy's NEM 2.0 Proposal: What It Means for Solar in Virginia

https://www.virtuesolar.com/2025/05/16/dominion-nem-2/
27•Vsolar•3d ago•11 comments

The forbidden railway: Vienna-Pyongyang (2008)

http://vienna-pyongyang.blogspot.com/2008/04/how-everything-began.html
62•1317•3h ago•15 comments

Run your GitHub Actions locally

https://github.com/nektos/act
95•flashblaze•3d ago•34 comments

Too Much Go Misdirection

https://flak.tedunangst.com/post/too-much-go-misdirection
126•todsacerdoti•6h ago•53 comments

Game theory illustrated by an animated cartoon game

https://ncase.me/trust/
121•felineflock•5h ago•20 comments

Remarks on AI from NZ

https://nealstephenson.substack.com/p/remarks-on-ai-from-nz
91•zdw•3d ago•42 comments

ClawPDF – Open-Source Virtual/Network PDF Printer with OCR and Image Support

https://github.com/clawsoftware/clawPDF
156•miles•9h ago•23 comments

Show HN: Windows 98 themed website in 1 HTML file for my post punk band

https://corp.band
133•jealousgelatin•4h ago•28 comments

European Investment Bank to inject €70B in European tech

https://ioplus.nl/en/posts/european-investment-bank-to-inject-70-billion-in-european-tech
226•saubeidl•5h ago•229 comments

Glasskube (YC S24) is hiring in Vienna to build Open Source deployment tools

https://www.ycombinator.com/companies/glasskube/jobs/wjB77iZ-founding-engineer-go-typescript-kubernetes-docker
1•pmig•4h ago

Microsoft's ICC blockade: digital dependence comes at a cost

https://www.techzine.eu/news/privacy-compliance/131536/microsofts-icc-blockade-digital-dependence-comes-at-a-cost/
168•bramhaag•3h ago•71 comments

Show HN: A MCP server to evaluate Python code in WASM VM using RustPython

https://github.com/tuananh/hyper-mcp/tree/main/examples/plugins/eval-py
7•tuananh•2d ago•2 comments

InventWood is about to mass-produce wood that's stronger than steel

https://techcrunch.com/2025/05/12/inventwood-is-about-to-mass-produce-wood-thats-stronger-than-steel/
423•LorenDB•1d ago•399 comments

Rivers

https://www.futilitycloset.com/2025/05/15/rivers/
38•surprisetalk•3d ago•3 comments

FCC Chair Brendan Carr is letting ISPs merge–as long as they end DEI programs

https://arstechnica.com/tech-policy/2025/05/fcc-chair-brendan-carr-is-letting-isps-merge-as-long-as-they-end-dei-programs/
35•rntn•1h ago•12 comments

Wikipedia's Most Translated Articles

https://sohom.dev/most-translated-articles-on-wikipedia/pretty.html
77•sohom_datta•5h ago•50 comments

Side projects I've built since 2009

https://naeemnur.com/side-projects/
226•naeemnur•12h ago•125 comments

Telum II at Hot Chips 2024: Mainframe with a Unique Caching Strategy

https://chipsandcheese.com/p/telum-ii-at-hot-chips-2024-mainframe-with-a-unique-caching-strategy
110•rbanffy•11h ago•49 comments

Dilbert creator Scott Adams says he will die soon from same cancer as Joe Biden

https://www.thewrap.com/dilbert-scott-adams-prostate-cancer-biden/
136•dale_huevo•4h ago•173 comments

Show HN: A native Hacker News reader with integrated todo/done tracking

https://github.com/haojiang99/hacker_news_reader
12•coolwulf•3h ago•6 comments

Diffusion Models Explained Simply

https://www.seangoedecke.com/diffusion-models-explained/
97•onnnon•8h ago•14 comments

Edit is now open source

https://devblogs.microsoft.com/commandline/edit-is-now-open-source/
150•ingve•5h ago•57 comments

23andMe Sells Gene-Testing Business to DNA Drug Maker Regeneron

https://www.bloomberg.com/news/articles/2025-05-19/23andme-sells-gene-testing-business-to-dna-drug-maker-regeneron
179•wslh•6h ago•101 comments

WireGuard-vanity-keygen: WireGuard vanity key generator

https://github.com/axllent/wireguard-vanity-keygen
9•simonpure•1h ago•1 comment

GitHub Copilot Coding Agent

https://github.blog/changelog/2025-05-19-github-copilot-coding-agent-in-public-preview/
220•net01•5h ago

Comments

r0ckarong•5h ago
Check in unreviewed slop straight into the codebase. Awesome.
postalrat•5h ago
Now developers can produce 20x the slop and refactor at 5x speed.
OutOfHere•4h ago
In my experience in VSCode, Claude 3.7 produced more unsolicited slop, whereas GPT-4.1 didn't. Claude aggressively paid attention to type compatibility. Each model would have its strengths.
olex•5h ago
> Once Copilot is done, it’ll tag you for review. You can ask Copilot to make changes by leaving comments in the pull request.

To me, this reads like it'll be a good junior and open up a PR with its changes, letting you (the issue author) review and merge. Of course, you can just hit "merge" without looking at the changes, but then it's kinda on you when unreviewed stuff ends up in main.

tmpz22•4h ago
A good junior has strong communication skills, humility, asks many good questions, has imagination, and a tremendous amount of human potential.
DeepYogurt•4h ago
Management: "Why aren't you going faster now that the AI generates all the code and we fired half the dev team?"
odiroot•5h ago
I'm waiting for the first unicorn that uses just vibe coding.
erikerikson•4h ago
I expect it to be a security nightmare
freeone3000•3h ago
And why would that matter?
timrogers•3h ago
Copilot pushes its work to a branch and creates a pull request, and then it's up to you to review its work, approve and merge.

Copilot literally can't push directly to the default branch - we don't give it the ability to do that - precisely because we believe that all AI-generated code (just like human generated code) should be carefully reviewed before it goes to production.

(Source: I'm the product lead for Copilot coding agent.)
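For readers who want the same guarantee on their own repos, the "nothing lands on the default branch without review" rule can be enforced mechanically with branch protection rather than trust. A minimal sketch of a request body for GitHub's branch-protection REST endpoint (`PUT /repos/{owner}/{repo}/branches/main/protection`); treat the exact field set as an assumption to check against the current API docs:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1
  },
  "enforce_admins": true,
  "required_status_checks": null,
  "restrictions": null
}
```

With something like this in place, even a Copilot-authored PR needs an approving human review before it can merge.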

muglug•5h ago
> Copilot excels at low-to-medium complexity tasks

Oh cool!

> in well-tested codebases

Oh ok never mind

abraham•4h ago
Have it write tests for everything and then you've got a well tested codebase.
eikenberry•3h ago
You forgot the /s
danielbln•3h ago
Caveat emptor: I've seen some LLMs mock the living hell out of everything, to the point of not testing much of anything. Something to be aware of.
yen223•2h ago
I've seen too many human operators do that too. Definitely a problem to watch out for
throwaway12361•4h ago
In my experience it works well even without good testing, at least for greenfield projects. It just works best if there are already tests when creating updates and patches.
lukehoban•3h ago
As peer commenters have noted, coding agent can be really good at improving test coverage when needed.

But also as a slightly deeper observation - agentic coding tools really do benefit significantly from good test coverage. Tests are a way to “box in” the agent and allow it to check its work regularly. While they aren’t necessary for these tools to work, they can enable coding agents to accomplish a lot more on your behalf.

(I work on Copilot coding agent)
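As a deliberately tiny, invented illustration of how tests "box in" an agent: any refactor the agent attempts has to keep pinned behavior green, so regressions surface immediately. The `slugify` function here is hypothetical, not anything from Copilot:

```python
# A characterization test pins observable behavior, so an agent
# refactoring the implementation gets immediate feedback if it
# changes what the function actually does.

def slugify(title: str) -> str:
    # Hypothetical function an agent might be asked to refactor.
    return "-".join(title.lower().split())

def test_slugify_pins_behavior():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already   spaced ") == "already-spaced"
    assert slugify("ONE") == "one"

test_slugify_pins_behavior()
print("ok")
```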

CSMastermind•3h ago
In my experience they write a lot of pointless tests that technically increase coverage while not actually adding much more value than a good type system/compiler would.

They also have a tendency to suppress errors instead of fixing them, especially when the right thing to do is throw an error on some edge case.
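The error-suppression tendency is easy to see in miniature. A hedged Python sketch (the `parse_port` example is invented for illustration): the first version quietly swallows the edge case the way generated code often does; the second raises so the bug stays visible.

```python
# Anti-pattern often seen in generated code: swallow the error and
# return a default, hiding the invalid edge case.
def parse_port_suppressed(value: str) -> int:
    try:
        return int(value)
    except ValueError:
        return 0  # silently masks bad input

# Better: let the edge case fail loudly so callers can handle it.
def parse_port(value: str) -> int:
    port = int(value)  # raises ValueError on junk like "abc"
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port("8080"))            # 8080
print(parse_port_suppressed("abc"))  # 0, and the bug disappears
```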

shepherdjerred•46m ago
You can tell the AI not to suppress errors
boomskats•5h ago
My buddy is at GH working on an adjacent project & he hasn't stopped talking about this for the last few days. I think I've been reminded to 'make sure I tune into the keynote on Monday' at least 8 times now.

I gave up trying to watch the stream after the third authentication timeout, but if I'd known it was this I'd maybe have tried a fourth time.

tmpz22•5h ago
I’m always hesitant to listen to the line coders on projects because they’re getting a heavy dose of the internal hype every day.

I’d love for this to blow past cursor. Will definitely tune in to see it.

dontlikeyoueith•4h ago
>I’m always hesitant to listen to the line coders on projects because they’re getting a heavy dose of the internal hype every day.

I'm senior enough that I get to frequently see the gap between what my dev team thinks of our work and what actual customers think.

As a result, I no longer care at all what developers (including myself on my own projects) think about the quality of the thing they've built.

unshavedyak•4h ago
What specific keynote are they referring to? I'm curious, but thus far my searches have failed
babelfish•4h ago
MS Build is today
throwaway12361•4h ago
Word of advice: just go to YouTube and skip the MS registration tax
jerpint•5h ago
These kinds of patterns allow compute to take much more time than a single chat, since they are asynchronous by nature, which I think is necessary to get to working solutions on harder problems.
lukehoban•3h ago
Yes. This is a really key part of why Copilot coding agent feels very different to use than Copilot agent mode in VS Code.

In coding agent, we encourage the agent to be very thorough in its work, and to take time to think deeply about the problem. It builds and tests code regularly to ensure it understands the impact of changes as it makes them, and stops and thinks regularly before taking action.

These choices would feel too “slow” in a synchronous IDE-based experience, but feel natural in an “assign to a peer collaborator” UX. We lean into this to provide as rich a problem-solving agentic experience as possible.

(I’m working on Copilot coding agent)

Scene_Cast2•5h ago
I tried doing some vibe coding on a greenfield project (using gemini 2.5 pro + cline). On one hand - super impressive, a major productivity booster (even compared to using a non-integrated LLM chat interface).

I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example is that I noticed them breaking abstractions (putting things where they don't belong). Unfortunately, there's not that much self-retrospection on these aspects if you ask about the quality of the code or if there are any better ways of doing it. Of course, if you pick up that something is in the wrong spot and prompt better, they'll pick up on it immediately.

I also ended up blowing through $15 of LLM tokens in a single evening. (Previously, as a heavy LLM user including coding tasks, I was averaging maybe $20 a month.)

falcor84•4h ago
> LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt

I wonder if the next phase would be the rise of (AI-driven?) "linters" that check that the implementation matches the architecture definition.

dontlikeyoueith•4h ago
And now we've come full circle back to UML-based code generation.

Everything old is new again!

candiddevmike•4h ago
> I also ended up blowing through $15 of LLM tokens in a single evening.

This is a feature, not a bug. LLMs are going to be the next "OMG my AWS bill" phenomenon.

Scene_Cast2•4h ago
Cline very visibly displays the ongoing cost of the task. Light edits are about 10 cents, and heavy stuff can run a couple of bucks. It's just that the tab accumulates faster than I expect.
PretzelPirate•4h ago
> Cline very visibly displays the ongoing cost of the task

LLMs are now being positioned as "let them work autonomously in the background" which means no one will be watching the cost in real time.

Perhaps I can set limits on how much money each task is worth, but very few would estimate that properly.
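Setting a per-task limit doesn't actually require estimating the cost well up front; even a crude accumulator that halts the loop once a budget is hit will bound the damage. A minimal sketch (the token price and usage numbers are made-up placeholders, not any vendor's real rates):

```python
# Crude per-task budget guard for a background agent loop.
# The price constant is an illustrative placeholder.
PRICE_PER_1K_TOKENS = 0.01  # dollars, assumed

class BudgetExceeded(Exception):
    pass

class TaskBudget:
    def __init__(self, limit_dollars: float):
        self.limit = limit_dollars
        self.spent = 0.0

    def charge(self, tokens: int) -> None:
        # Accumulate cost and stop the task once the cap is crossed.
        self.spent += tokens / 1000 * PRICE_PER_1K_TOKENS
        if self.spent > self.limit:
            raise BudgetExceeded(
                f"spent ${self.spent:.2f} of ${self.limit:.2f}"
            )

budget = TaskBudget(limit_dollars=0.05)
try:
    for step_tokens in [1200, 3400, 900, 2500]:  # stand-in for real usage
        budget.charge(step_tokens)
except BudgetExceeded as exc:
    print("halted:", exc)
```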

Aurornis•53m ago
> LLMs are now being positioned as "let them work autonomously in the background"

The only people who believe this level of AI marketing are the people who haven't yet used the tools.

> which means no one will be watching the cost in real time.

Maybe some day there's an agentic coding tool that goes off into the weeds and runs for days doing meaningless tasks until someone catches it and does a Ctrl-C, but the tools I've used are more likely to stop short of the goal than to continue crunching indefinitely.

Regardless, it seems like a common experience for first-timers to try a light task and then realize they've spent $3, instantly setting expectations for how easy it is to run up a large bill if you're not careful.

eterm•3h ago
> Light edits are about 10 cents

Some well-paid developers will excuse this with, "Well if it saved me 5 minutes, it's worth an order of magnitude more than 10 cents".

Which is true, however there's a big caveat: time saved isn't time gained.

You can "Save" 1,000 hours every night, but you don't actually get those 1,000 hours back.

grepfru_it•53m ago

    Hourly_rate / 12 = 5min_rate

    If light_edit_cost < 5min_rate then savings=true
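The rule of thumb above is easy to make concrete: dividing an hourly rate by 12 gives the value of five minutes. A quick Python version (the $150/hour rate is just an example value):

```python
# Back-of-envelope: is a 10-cent edit cheaper than five minutes of
# a developer's time? hourly_rate / 12 = value of five minutes.
def saves_money(hourly_rate: float, edit_cost: float) -> bool:
    five_min_rate = hourly_rate / 12
    return edit_cost < five_min_rate

print(saves_money(hourly_rate=150.0, edit_cost=0.10))  # True: $0.10 < $12.50
```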
shepherdjerred•50m ago
> You can "Save" 1,000 hours every night, but you don't actually get those 1,000 hours back.

What do you mean?

If I have some task that requires 1000 hours, and I'm able to shave it down to one hour, then I did just "save" 999 hours -- just in the same way that if something costs $5 and I pay $4, I saved $1.

philkuz•1h ago
I think that models are gonna commoditize, if they haven't already. The cost of switching over is rather small, especially when you have good evals on what you want done.

Also there's no way you can build a business without providing value in this space. Buyers are not that dumb.

jstummbillig•4h ago
If you want to use Cline and are at all price sensitive (in these ranges) you have to do manual context management just for that reason. I find that too cumbersome and use Windsurf (currently with Gemini 2.5 pro) for that reason.
BeetleB•4h ago
> I also ended up blowing through $15 of LLM tokens in a single evening.

Consider using Aider, and aggressively managing the context (via /add, /drop and /clear).

https://aider.chat/

danenania•4h ago
My tool Plandex[1] allows you to switch between automatic and manual context management. It can be useful to begin a task with automatic context while scoping it out and making the high level plan, then switch to the more 'aider-style' manual context management once the relevant files are clearly established.

1 - https://github.com/plandex-ai/plandex

Also, a bit more on auto vs. manual context management in the docs: https://docs.plandex.ai/core-concepts/context-management

tmpz22•4h ago
While it's being touted for greenfield projects, I've noticed a lot of failures when it comes to bootstrapping a stack.

For example, it (Gemini 2.5) really struggles with newer ecosystems like FastAPI when wiring libraries like SQLAlchemy, Pytest, Python-playwright, etc., together.

I find more value in bootstrapping myself, and then using it to help with boilerplate once an effective safety harness is in place.

SkyPuncher•1h ago
I loathe using AI in a greenfield project. There are simply too many possible paths, so it seems to randomly switch between approaches.

In a brownfield code base, I can often provide it reference files to pattern match against. So much easier to get great results when it can anchor itself in the rest of your code base.

shepherdjerred•52m ago
$15 in an evening sounds like a great deal when you consider the cost of highly-paid software engineers
taurath•4h ago
> Copilot excels at low-to-medium complexity tasks in well-tested codebases, from adding features and fixing bugs to extending tests, refactoring, and improving documentation.

Bounds, bounds, bounds, bounds. The important part for humans seems to be maintaining boundaries for AI. If your well-tested codebase has tests built through AI, it's probably not going to work.

I think it's somewhat telling that they can't share numbers for how they're using it internally. I want to know that Microsoft, the company famous for dogfooding, is using this day in and day out, with success. There's real stuff in there, and my brain has an insanely hard time separating the trillion dollars of hype from the usefulness.

twodave•4h ago
I feel like I saw a quote recently that said 20-30% of MS code is generated in some way. [0]

In any case, I think this is the best use case for AI in programming—as a force multiplier for the developer. It's best for both AI and humanity that AI avoid diminishing the creativity, agency and critical thinking skills of its human operators. AI should be task-oriented; high-level decision-making and planning should always be a human task.

So I think our use of AI for programming should remain heavily human-driven for the long term. Ultimately, its use should involve enriching humans’ capabilities over churning out features for profit, though there are obvious limits to that.

[0] https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

tmpz22•4h ago
How much of that is protobuf stubs and other forms of banal autogenerated code?
twodave•4h ago
Updated my comment to include the link. As much as 30% specifically generated by AI.
shafyy•4h ago
I would still wager that most of the 30% is boilerplate stuff. Which is OK, but it sounds less impressive with that caveat.
OnionBlender•3h ago
The 2nd paragraph contradicts the title.

The actual quote by Satya says, "written by software".

twodave•2h ago
Sure but then he says in his next sentence he expects 50% by AI in the next year. He’s clearly using the terms interchangeably.
greatwhitenorth•3h ago
How much was previously generated by intellisense and other code gen tools before AI? What is the delta?
ilaksh•3h ago
You might want to study the history of technology and how rapidly compute efficiency has increased as well as how quickly the models are improving.

In this context, assuming that humans will still be able to do high level planning anywhere near as well as an AI, say 3-5 years out, is almost ludicrous.

_se•3h ago
Reality check time for you: people were saying this exact thing 3 years ago. You cannot extrapolate like that.
DeepYogurt•3h ago
> I feel like I saw a quote recently that said 20-30% of MS code is generated in some way. [0]

Similar to google. MS now requires devs to use ai

timrogers•3h ago
We've been using Copilot coding agent internally at GitHub, and more widely across Microsoft, for nearly three months. That dogfooding has been hugely valuable, with tonnes of feedback (and bug bashing!) that has helped us get the agent ready to launch today.

So far, the agent has been used by about 400 GitHub employees in more than 300 of our repositories, and we've merged almost 1,000 pull requests contributed by Copilot.

In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)

(Source: I'm the product lead at GitHub for Copilot coding agent.)

binarymax•3h ago
So I need to ask: what is the overall goal of your project? What will you do in, say, 5 years from now?
timrogers•3h ago
What I'm most excited about is allowing developers to spend more of their time on the work they enjoy, and less of it on mundane, boring or annoying tasks.

Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates - and I really think we're heading to a world where AI can take that load off and free me up to work on the most interesting and complex problems.

binarymax•3h ago
Thanks for the response… do you see a future where engineers are just prompting all the time? Do you see a timeline in which todays programming languages are “low level” and rarely coded by hand?
petetnt•2h ago
What about developers who do enjoy writing, for example, high-quality documentation? Do you expect the status quo to be that most documentation is AI slop, and AI itself just brute-forces its way through the issues? How close are we to the point where the AI can handle "tricky dependency updates" but not "the most interesting and complex problems"? Who writes the tests required for the "well tested" codebases that GitHub Copilot coding agent needs to work properly?

What is the job for the developer now? Writing tickets and reviewing low quality PRs? Isn't that the most boring and mundane job in the world?

doug_durham•1h ago
I find your comment "AI slop" in reference to technical documentation strange. It isn't a choice between finely crafted prose and banal text. It's documentation that exists versus documentation that doesn't exist, or documentation that is hopelessly out of date. In my experience LLMs do a wonderful job of translating from code to documentation. They even do a good job inferring the reasons for design decisions. I'm all in on LLM-generated technical documentation. If I want well-written prose I'll read literature.
petetnt•1h ago
Documentation is not just translating code to text - I don't doubt that LLMs are wonderful at that: that's what they understand. They don't understand users though, and that's what separates a great documentation writer from someone who documents.
doug_durham•1h ago
Great technical documentation rarely gets written. You can tell the LLM the audience they are targeting and it will do a reasonable job. I truly appreciate technical writers, and hold great ones in special esteem. We live in a world where the market doesn't value this.
skydhash•20m ago
The market does value good documentation. Anything critical and commonly used is pretty well documented (Linux, databases, software like Adobe's, ...). You can see how many books/articles have been written about those systems.
bamboozled•8m ago
> Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates

So they won’t like working on their job ?

tokioyoyo•3m ago
You know exactly what they meant, and you know they’re correct.
ilaksh•3h ago
That's a completely nonsensical question given how quickly things are evolving. No one has a five year project timeline.
binarymax•3h ago
Absolutely the wrong take. We MUST think about what might happen in several years. Anyone who says we shouldn’t is not thinking about this technology correctly. I work on AI tech. I think about these things. If the teams at Microsoft or GitHub are not, then we should be pushing them to do so.
ilaksh•2h ago
He asked that in the context of an actual specific project. It did not make sense the way he asked it. And it's the executives' job to plan that out five years down the line, although I guarantee you none of them are trying to predict that far.
ilaksh•3h ago
What model does it use? gpt-4.1? Or can it use o3 sometimes? Or the new Codex model?
aaroninsf•3h ago
Question you may have a very informed perspective on:

where are we wrt the agent surveying open issues (say, via JIRA), evaluating which ones it would be most effective at handling, and taking them on, ideally with some check-in for confirmation?

Or, contrariwise, from having product management agents which do track and assign work?

9wzYQbTYsAIc•3h ago
Check out this idea: https://fairwitness.bot (https://news.ycombinator.com/item?id=44030394).

The entire website was created by Claude Sonnet through Windsurf Cascade, but with the “Fair Witness” prompt embedded in the global rules.

If you regularly guide the LLM to “consult a user experience designer”, “adopt the multiple perspectives of a marketing agency”, etc., it will make rather decent suggestions.

I’ve been having pretty good success with this approach, granted mostly at the scale of starting the process with “build me a small educational website to convey this concept”.

aegypti•2h ago
Tell Claude the site is down!
overfeed•3h ago
> we've merged almost 1,000 pull requests contributed by Copilot

I'm curious to know how many Copilot PRs were not merged and/or required human take-overs.

sethammons•2h ago
Textbook survivorship bias: https://en.wikipedia.org/wiki/Survivorship_bias

Every bullet hole in that plane is one of the 1k PRs contributed by Copilot. The missing dots, and whole missing planes, are unaccounted for. I.e., "AI ruined my morning".

n2d4•1h ago
It's not survivorship bias. Survivorship bias would be if you made any conclusions from the 1000 merged PRs (eg. "90% of all merged PRs did not get reverted"). But simply stating the number of PRs is not that.
MoreQARespect•34m ago
If they measured that too it would make it harder to justify a MSFT P/E ratio of 29.6.
literalAardvark•1h ago
"We need to get 1000 PRs merged from Copilot" "But that'll take more time" "Doesn't matter"
worldsayshi•1h ago
I do agree that some scepticism is due here but how can we tell if we're treading into "moving the goal posts" territory?
overfeed•1h ago
I'd love to know where you think the starting position of the goal posts was.

Everyone who has used AI coding tools interactively or as agents knows they're unpredictably hit or miss. The old, non-agent Copilot has a dashboard that shows org-wide rejection rates for paying customers. I'm curious to learn what the equivalent rejection rate for the agent is for the people who make the thing.

NitpickLawyer•2h ago
> In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)

Really cool, thanks for sharing! Would you perhaps consider implementing something like these stats that aider keeps on "aider writing itself"? - https://aider.chat/HISTORY.html

KenoFischer•32m ago
What's the motivation for restricting to Pro+ if billing is via premium requests? I have a (free, via open source work) Pro subscription, which I occasionally use. I would have been interested in trying out the coding agent, but how do I know if it's worth $40 for me without trying it ;).
burnt-resistor•26m ago
When I repeated to other tech people from about 2012 to 2020 that the technological singularity was very close, no one believed me. Coding is just the easiest work to automate away into near oblivion, and too many non-technical people drank the Flavor Aid for the fallacy that it can be "abolished" completely soon. It will gradually come for all sorts of knowledge-work specialists, including electrical and mechanical engineers, probably doctors, and of course office work too. Some iota of specialists will remain to tune the bots, and some will remain in the field where expertise is absolutely required, but the paths that once offered upward mobility into the middle class are being destroyed and replaced with nothing. There won't be "retraining" or hand-waved other opportunities for the "basket of labor", only competition among many uniquely, far overqualified people for ever-dwindling opportunities.

It is difficult to get a man to understand something when his salary depends upon his not understanding it. - Upton Sinclair

kenjackson•8m ago
I don't think it was unreasonable to be very skeptical at the time. We generally believed that automation would get rid of repetitive work that didn't require a lot of thought. And in many ways programming was seen almost at the top of the heap. Intellectually demanding and requiring high levels of precision and rigor.

Who would've thought (except you) that this would be one of the things that AI would be especially suited for. I don't know what this progression means in the long run. Will good engineers just become 1000x more productive as they manage X number of agents building increasingly complex code (with other agents constantly testing, debugging, refactoring and documenting them) or will we just move to a world where we just have way fewer engineers because there is only a need for so much code.

dsl•25m ago
> In the repo where we're building the agent, the agent itself is actually the #5 contributor

How does this align with Microsoft's AI safety principals? What controls are in place to prevent Copilot from deciding that it could be more effective with less limitations?

bamboozled•9m ago
Haha
ctkhn•2h ago
That's great, our leadership is heavily pushing ai-generated tests! Lol
mjr00•1h ago
From talking to colleagues at Microsoft it's a very management-driven push, not developer-driven. Friend on an Azure team had a team member who was nearly put on a PIP because they refused to install the internal AI coding assistant. Every manager has "number of developers using AI" as an OKR, but anecdotally most devs are installing the AI assistant and not using it or using it very occasionally. Allegedly it's pretty terrible at C# and PowerShell which limits its usefulness at MS.
shepherdjerred•56m ago
If you aren't using AI day-to-day then you're not adapting. Software engineering is not going to look at all the same in 5-10 years.
mjr00•48m ago
What does this have to do with my comment? Did you mean to reply to someone else?

I don't understand what this has to do with AI adoption at MS (and Google/AWS, while we're at it) being management-driven.

antihipocrat•30m ago
That's exactly what senior executives who aren't coding are saying everywhere.

Meanwhile, engineers are using it for code completion and as a Google search alternative.

I don't see much difference here at all, the only habit to change is learning to trust an AI solution as much as a Stack Overflow answer. Though the benefit of SO is each comment is timestamped and there are alternative takes, corrections, caveats in the comments.

evantbyrne•10m ago
It's just tooling. It costs nothing to wait for it to get better. It's not like you're going to miss out on AGI. The cost of actually testing every slop code generator is non-trivial.
OutOfHere•4h ago
GitHub itself had this exact feature late last year, perhaps under a slightly different name.
throwup238•4h ago
Are you thinking of Copilot Workspaces?

That seemed to drop off the GitHub changelog after February. I'm wondering if that team got reallocated to the Copilot agent.

WorldMaker•3h ago
Probably. Also this new feature seems like an expansion/refinement of Copilot Workspaces to better fit the classic Github UX: "assign an issue to Copilot to get a PR" sounds exactly like the workflow Copilot Workspaces wanted to have when it grew up.
timrogers•3h ago
I think you're probably thinking of Copilot Workspace (<https://github.blog/news-insights/product-news/github-copilo...>).

Copilot Workspace could take a task, implement it and create a PR - but it had a linear, highly structured flow, and wasn't deeply integrated into the GitHub tools that developers already use like issues and PRs.

With Copilot coding agent, we're taking all of the great work on Copilot Workspace, and all the learnings and feedback from that project, and integrating it more deeply into GitHub and really leveraging the capabilities of 2025's models, which allow the agent to be more fluid, asynchronous and autonomous.

(Source: I'm the product lead for Copilot coding agent.)

softwaredoug•4h ago
Is Copilot a classic case of a slow megacorp getting outflanked by more creative and unhindered newcomers (i.e. Cursor)?

It seems Copilot could have really owned the vibe coding space. But that didn't happen. I wonder why? Lots of ideas gummed up in organizational inefficiencies, etc.?

ilaksh•3h ago
This is a direct threat to Cursor. The smarter the models get, the less often programmers really need to dig into an IDE, even one with AI in it. Give it a couple of years and there will be a lot of projects completed just by assigning tasks, where no one ever opened Cursor or anything else.
theusus•4h ago
I have so far been disappointed by Copilot's offerings. It's just not good enough for anything valuable. I don't want it to write my getters and setters and call it a day.
rvz•3h ago
I think we expected disappointment with this one. (I expected it at least)[0]

But the upgraded Copilot was just a response to Cursor and Windsurf.

We'll see.

[0] https://news.ycombinator.com/item?id=43904611

asadm•4h ago
In the early days of LLMs, I had developed an "agent" using a GitHub Actions + issues workflow[1], similar to how this works. It was very limited but kinda worked, i.e. you'd assign it a bug, it fired an action, did some architect/editing tasks, validated the changes, and finally sent a PR.

Good to see an official way of doing this.

1. https://github.com/asadm/chota
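
The issue-to-PR loop described above can be wired up with a small GitHub Actions workflow. Here is a hedged sketch; the bot login, script name, and secret are illustrative placeholders, not taken from the linked repo:

```yaml
# .github/workflows/issue-agent.yml -- illustrative sketch, not chota's actual config.
# "run_agent.py", "my-agent-bot", and the secret name are placeholders.
name: issue-agent
on:
  issues:
    types: [assigned]

jobs:
  fix:
    # Only react when the issue is assigned to the bot account
    if: github.event.assignee.login == 'my-agent-bot'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      # The agent reads the issue, edits files, and validates the changes
      - run: python run_agent.py --issue "${{ github.event.issue.number }}"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      # Turn whatever the agent changed into a pull request
      - uses: peter-evans/create-pull-request@v6
        with:
          branch: agent/issue-${{ github.event.issue.number }}
          title: "Agent fix for #${{ github.event.issue.number }}"
```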

nodja•4h ago
I wish they'd optimized things before adding more crap that will slow things down even more. The only thing that's fast with Copilot is the autocomplete; it sometimes takes several minutes to make edits on a 100-line file, regardless of the model I pick (some are faster than others). If these models had a close-to-100% hit rate this would be somewhat fine, but going back and forth with something that takes this long is not productive. It's literally faster to open claude/chatgpt in a new tab, paste the question and code there, and paste the result back into VSCode than to use their ask/edit/agent tools.

I cancelled my Copilot subscription last week, and when it expires in two weeks I'll most likely shift to local models for autocomplete/simple stuff.

brushfoot•3h ago
My experience has mostly been the opposite -- changes to several-hundred-line files usually only take a few seconds.

That said, months ago I did experience the kind of slow agent edit times you mentioned. I don't know where the bottleneck was, but it hasn't come back.

I'm on library WiFi right now, "vibe coding" (as much as I dislike that term) a new tool for my customers using Copilot, and it's snappy.

nodja•3h ago
Here's a video of what it looks like with sonnet 3.7.

https://streamable.com/rqlr84

The claude and gemini models tend to be the slowest (yes, including flash). 4o is currently the fastest but still not great.

NicuCalcea•1h ago
For me, the speed varies from day to day (Sonnet 3.7), but I've never seen it this slow.
BeetleB•3h ago
Several minutes? Something is seriously wrong. For most models, it takes seconds.
nodja•2h ago
2m27s for a partial response editing a 178 line file (it failed with an error, which seems to happen a lot with claude, but that's another issue).

https://streamable.com/rqlr84

joelthelion•3h ago
I don't know, I feel this is the wrong level to place the AI at this moment. Chat-based AI programming (such as Aider) offers more control, while being almost as convenient.
sync•3h ago
Anthropic just announced the same thing for Claude Code, same day: https://docs.anthropic.com/en/docs/claude-code/github-action...
OutOfHere•3h ago
Which model does it use? Will this let me select which model to use? I have seen a big difference in the type of code that different models produce, although their prompts may be to blame/credit in part.
qwertox•2h ago
I assume you can select whichever one you want (GPT-4o, o3-mini, Claude 3.5, 3.7, 3.7 thinking, Gemini 2.0 Flash, GPT-4.1, and the previews o1, Gemini 2.5 Pro, and o4-mini), subject to the pricing multipliers they announced recently [0].

Edit: From TFA: "Using the agent consumes GitHub Actions minutes and Copilot premium requests, starting from entitlements included with your plan."

[0] https://docs.github.com/en/copilot/managing-copilot/monitori...

shwouchk•3h ago
I played around with it quite a bit. It is both impressive and scary. Most importantly, it tends to indiscriminately pull in dependencies from random tiny repos, and often enough not the correct ones, even for major projects. Buyer beware.
yellow_lead•31m ago
Given that PRs run actions in a more trusted context for private repos, this is a bit concerning.
qwertox•3h ago
In hindsight it was a mistake that Google killed Google Code. Then again, I guess they wouldn't have put enough effort into it to develop into a real GitHub alternative.

Now Microsoft sits on a goldmine of source code and has the ability to offer AI integration even to private repositories. I can upload my code into a private repo and discuss it with an AI.

The only thing Google can counter with would be to build tools which developers install locally, but even then I guess that the integration would be limited.

And considering that Microsoft owns the "coding OS" VS Code, it makes Google look even worse. Let's see what they come up with tomorrow at Google I/O, but I doubt that it will be a serious competition for Microsoft. Maybe for OpenAI, if they're smart, but not for Microsoft.

dangoodmanUT•1h ago
Or they'll just buy Cursor
geodel•1h ago
You win some you lose some. Google could have continued with Google code. Microsoft could've continued with their phone OS. It is difficult to know when to hold and when to fold.
abraham•1h ago
Gemini has some GitHub integrations

https://developers.google.com/gemini-code-assist/docs/review...

candiddevmike•1h ago
Google Cloud has a pre-GA product called "Secure Source Manager" that looks like a fork of Gitea: https://cloud.google.com/secure-source-manager/docs/overview

Definitely not Google Code, but better than Cloud Source Repositories.

fvold•2h ago
The biggest change Copilot has prompted for me so far is replacing my VSCode with VSCodium, to be sure it doesn't sneak my code off to a third party without my knowledge.

I'm all for new tech getting introduced and made useful, but let's make it all opt in, shall we?

qwertox•2h ago
Care to explain? Where are they uploading code to?
2OEH8eoCRo0•2h ago
Kicking the can down the road. So we can all produce more code faster, but there is no silver bullet. Most of my time isn't spent writing the code anyway.
sudhar172•2h ago
Nice
azhenley•2h ago
Looks like their GitHub Copilot Workspace.

https://githubnext.com/projects/copilot-workspace

net01•2h ago
On another note: GitHub's automated, auto-merged DMCA sync PRs get automated Copilot reviews, every single one of them. https://github.com/github/dmca/pull/17700

AMAZING

quantadev•1h ago
I love Copilot in VSCode. I have it set to use Claude most of the time, but it lets you pick your favorite LLM for it to use. I just open the files I'm going to refactor, type into the chat window what I want done, and click 'accept' on every code change it recommends in its answer, causing VSCode to auto-merge the changes into my code. Couldn't possibly be simpler. Then I scrutinize and test. If anything went wrong I just use GitLens to roll back the change, but that's very rare.

Especially now that Copilot supports MCP, I can plug in my own custom "Tools" (i.e. function calling done by the AI agent), and I have everything I need. I never even bothered trying Cursor or Windsurf, which I'm sure are great too, mainly because they're just forks of VSCode as the IDE.
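
For context, plugging a custom MCP server into VS Code is mostly configuration: you point the editor at a local server process and Copilot's agent picks up its tools. A hedged sketch of `.vscode/mcp.json` (the server name and command are illustrative, not a real project):

```json
{
  "servers": {
    "my-tools": {
      "type": "stdio",
      "command": "python",
      "args": ["my_tools_server.py"]
    }
  }
}
```

Here `my_tools_server.py` stands in for any process that speaks the MCP protocol over stdio, e.g. one built with an MCP SDK.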

SkyBelow•34m ago
Have you tried the agent mode instead of the ask mode? With just a bit more prompting, it does a pretty good job of finding the files it needs to use on its own. Then again, I've only used it in smaller projects so larger ones might need more manual guidance.
alvis•1h ago
God save the juniors...
sethops1•11m ago
> Copilot coding agent is rolling out to GitHub Mobile users on iOS and Android, as well as GitHub CLI.

Wait, is this going to pollute the `gh` tool? Please tell me this isn't happening.

hidelooktropic•5m ago
UX-wise...

I kind of love the idea that all of this works in the familiar flow of raising an issue and having a magic coder swoop in and making a pull request.

At the same time, I have been spoiled by Cursor. I feel I would end up preferring that the magic coder be right there with me in the IDE, where I can run things and make adjustments without having to do a follow-up request or comment on a line.