
Ask HN: How much of OpenAI code is written by AI?

54•growbell_social•6h ago
Amidst the nascent concerns of AI replacing software engineers, it seems a proxy for that might be the amount of code written at OpenAI by the various models they have.

If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non engineers?) obviously do.

I'd be curious if any OpenAI engineers can share a rough estimate of their day-to-day composition of human-generated vs AI-generated code.

Comments

crop_rotation•6h ago
> If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend.

This is a naive take. Throughout history, things have been automated with the help of the very professionals who were being automated away.

growbell_social•5h ago
I don't disagree it's a naive take, and I would love to read about some examples where this happened.

I haven't seen too many industries be automated away first-hand, and I'm sure there are historic examples. I wouldn't expect the lamp lighters to have been championing the rise of electric lamps. Maybe they did though, because it meant they could work fewer hours.

madeofpalk•3h ago
Software developers have been automating away software developers since the beginning of the field. Higher level languages, better+safer languages, improved IDEs. All the ways in which we made developing software easier and more efficient. QA teams investing in automated testing.
est31•1h ago
It's everywhere.

One example is probably blacksmiths. Each larger village used to have one; nowadays that's no longer the case, as people order replacement parts for whatever metal piece broke in their car, house, etc. The metal shaping happens in factories instead. But you still need people who know how to work metal, and the early folks in that profession were blacksmiths. Nowadays you wouldn't call them that any more, but it's much the same shift as "engineers with AI".

Same goes for industrialized food. The huge industrial bakeries still employ people who know how to bake bread, but they can do 1000x more than they could if they had to knead the dough on their own, separate it, form little pieces of bread, arrange them nicely on a tray, etc.

Now it's a machine doing most pieces of work, but they still have folks who design new bread products, adjust machines, know what to change in the formula if one particular product from one particular supplier is not available in the quantities needed, etc. A lot of the skills needed align enough with the traditional job of a baker that these factories employ actual bakers.

As for software, whenever I use AI tools, I don't really feel that they help me save time much. But it's entirely possible that this is going to change in the future. They are only going to get better. And of course a lot of people say it's useful for them, so it already affects folks. Just because something is not useful to me doesn't mean it's not going to be useful to anybody.

iknowSFR•5h ago
Yeah, this needs to be considered down to the individual level. If your employer not only incentivizes you to go against your best interests but also threatens the stability of your role, then your choices are either do the job or accept that you might not be reliably employed. This is the culmination of decades of moving power from the employee to the employer.
notfried•5h ago
Not OpenAI, but Anthropic CPO Mike Krieger said in response to a question of how much of Claude Code is written by Claude Code: "At this point, I would be shocked if it wasn't 95% plus. I'd have to ask Boris and the other tech leads on there."

[0] https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what...

PostOnce•4h ago
TFA says "How Anthropic uses AI to write 90-95% of code for some products and the surprising new bottlenecks this creates".

for some products.

If it were 95% of anything useful, Anthropic would not still have >1000 employees, and the rest of the economy would be collapsing, and governments would be taking some kind of action.

Yet none of that appears to be happening. Why?

ebiester•4h ago
I don't doubt it, especially when you have an organization that is focused on building the most effective tooling possible. I'd imagine that they use AI even when it isn't the most optimal, because they are trying to build experiences that will allow everyone else to do the same.

So let's take it at face value and say 95% is written by AI. When you free one bottleneck you expose the next. You still need developers to review it to make sure it's doing the right thing. You still need developers to be able to translate the business context into instructions that make the right product. You have to engage with the product. You need to architect the system - the context windows mean that the tasks can't just be handed off to AI.

So the role of the programmer changes - you still need technical competence, but in service of the judgement calls of "what is right for the product?" Perhaps there's a world where developers and product management merge, but I think we will still need the people.

aforwardslash•3h ago
Been using claude code almost daily for over a month. It is the smartest junior developer I've ever seen; it can spew high-quality advanced code and, with the same confidence, spew utter garbage or over-engineered crap; it can confidently tell you a task is done and passing tests, with glaring bugs in it; it can happily introduce security bugs if it's a shortcut to finish something. And sometimes, it will just tell you "not gonna do it, it takes too much time, so here's a todo comment". In short, it requires constant supervision and careful code review - you still need experienced developers for this.
sothatsit•3h ago
> If it were 95% of anything useful, Anthropic would not still have >1000 employees

I think firing people does not come as a logical conclusion of 95% of code being written by Claude Code. There is a big difference between AI autonomously writing code and developers just finding it easier to prompt changes rather than typing them manually.

In one case, you have an automated software engineer and may be able to reduce your headcount. In the other, developers may just be slightly more productive, or even just enjoy writing code using AI more, but the coding is still very much driven by the developers themselves. I think right now Claude Code shows signs of the former for simple cases, but mostly falls into the latter bucket.

dude250711•4h ago
They are likely lying:

https://www.anthropic.com/candidate-ai-guidance

> During take-home assessments: Complete these without Claude unless we indicate otherwise. We’d like to assess your unique skills and strengths. We'll be clear when AI is allowed (example: "You may use Claude for this coding challenge").

> During live interviews: This is all you–no AI assistance unless we indicate otherwise. We’re curious to see how you think through problems in real time. If you require any accommodations for your interviews, please let your recruiter know early in the process.

He'd have to ask yet did not ask? A CPO of an AI company?

wahnfrieden•3h ago
That's not evidence of anything to do with their hired developers. Interview practices have never reflected on-the-job practices
levocardia•1h ago
95% of Claude Code was actually written on a whiteboard!
another_twist•4h ago
Sure, but what did the CTO say? Also, was he shocked? There's no definitive answer; this is an evasive one.
crinkly•3h ago
Standard CxO mentality. “I think the facts about our product might be true but I won’t say it because the shareholders and SEC will hang me when they find out it’s bullshit.” Then defer to next monkey in circus. By which time the tech press, which seems to have a serious problem with literacy and honesty (gotta get those clicks) extrapolates it for them. Then analysts summarise those things as projections. Urgh.

The other tactic is saying two unrelated things in a sentence and hoping you think it’s causal, not a fuck up and some marketing at the same time.

asadotzler•2h ago
Weasel words. No different than Nadella claiming 50%.

When you drill in you find out the real claims distill into something like "95% of the code, in some of the projects, was written by humans who sometimes use AI in their coding tasks."

If they don't produce data, show the study or other compelling examples, don't believe the claims; it's just marketing and marketing can never be trusted because marketing is inherently manipulative.

kypro•25m ago
It could be true, the primary issue here is that it's the wrong metric. I mean you could write 100% of your code with AI if you were basically telling it exactly what to write...

If we assume it isn't a lie, then given current AI capabilities we should assume that AI isn't being used in a maximally efficient way.

However, developer efficiency isn't the only metric a company like Anthropic would care about; after all, they're trying to build the best coding assistant with Claude Code. So for them, understanding the failure cases, and the prompting needed to recover from those failures, is likely more important than just the lines of code their developers are producing per hour.

So my guess (assuming the claim is true) is that Anthropic are forcing their employees to use Claude Code to write as much code as possible to collect data on how to improve it.

alyxya•2h ago
It’s worth pointing out that the statement is about how much of Claude Code is written with it and not how much of the codebase of the whole company. In the more critical parts of the codebase where bugs can cause bigger problems, I expect a lot less code to be fully AI generated.
ivraatiems•4h ago
I absolutely believe that a large proportion of new code written is at least in-part AI generated, but that doesn't mean a large proportion of new code is 100% soup-to-nuts/pull-request-to-merge the result of decisions made by an agent and not a human. I doubt that very much.

I think the difference between situations where AI-driven development works and doesn't is going to be largely down to the quality of the engineers who are supervising and prompting to generate that code, and the degree to which they manually evaluate it before moving it forward. I think you'll find that good engineers who understand what they're telling an agent to do are still extremely valuable, and are unlikely to go anywhere in the short to mid term. AI tools are not yet at the point where they are reliable on their own, even for systems they helped build, and it's unclear whether they will be any time soon purely through model scaling (though it's possible).

I think you can see the realities of AI tooling in the fact that the major AI companies are hiring lots and lots of engineers, not just for AI-related positions, but for all sorts of general engineering positions. For example, here's a post for a backend engineer at OpenAI: https://openai.com/careers/backend-software-engineer-leverag... - and one from Anthropic: https://job-boards.greenhouse.io/anthropic/jobs/4561280008.

Note that neither of these require direct experience with using AI coding agents, just an interest in the topic! Contrast that with many companies who now demand engineers explain how they are using AI-driven workflows. When they are being serious about getting people to do the work that will make them money, rather than engaging in marketing hype, AI companies are honest: AI agents are tools, just like IDEs, version control systems, etc. It's up to the wise engineer to use them in a valuable way.

Is it possible they're just hiring these folks to try and make their models better to later replace those people? It's possible. But I'm not sure when in time, if ever, they'll reach the point where that was viable.

osigurdson•4h ago
>> new code is 100% soup-to-nuts/pull-request-to-merge the result of decisions made by an agent

I am beginning to have more success with this in simpler parts of the code. Particularly if you already have a good example of how to do something and you need something very similar. I usually have to do a few tweaks but generally quite useful.

asadotzler•2h ago
A large proportion of code written a quarter century ago was also in part AI generated. IntelliSense is AI and it's been around since the 90s.
mdaniel•22m ago
I would argue intellisense is far closer to a select statement than "ai" anything. What could come after myString.s is <<select method_name from all_methods where type_name = 'String' and method_name like '%s%'>> where some IDEs prefer <<like 's%'>> and others the <<contains>> style

IJ does some truly stellar introspection to offer sane defaults in the current completion context, such as offering only variables of the correct type for parameters but I think of that as discipline and not AI. Plus, IJ never once made up an API that didn't exist
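
The "completion as a query" framing above can be sketched in a few lines. This is a hypothetical illustration (the `complete` function and the method list are made up for the example, not any IDE's actual API): filtering a known member list by prefix or substring is a lookup, not machine learning.

```python
# Naive "completion as a query": filter known member names, no ML involved.
def complete(members: list[str], fragment: str, mode: str = "prefix") -> list[str]:
    """Return candidate members matching `fragment`.

    mode="prefix" mimics IDEs that match like 's%';
    mode="contains" mimics the like '%s%' style.
    """
    frag = fragment.lower()
    if mode == "prefix":
        hits = [m for m in members if m.lower().startswith(frag)]
    else:
        hits = [m for m in members if frag in m.lower()]
    return sorted(hits)

# Completing `myString.s` against a few string-like method names:
methods = ["split", "startswith", "strip", "swapcase", "casefold", "isdigit"]
print(complete(methods, "s"))              # prefix style
print(complete(methods, "s", "contains"))  # contains style
```

Type-aware filtering (offering only variables of the correct type for a parameter) is the same idea with an extra condition on the query, which is why it reads as discipline rather than AI.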

appreciatorBus•4h ago
> If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non engineers?) obviously do.

Software engineers have been automating away jobs for other people for nearly a century. It would be quite rich if the profession suddenly felt qualms about the process! (TBC I think automation is great and should always be pursued. Ofc there are real human concerns when change happens quickly but I am skeptical that smashing the looms is the best response)

another_twist•4h ago
Software engineering has also been automating its own jobs for ages. The first thing we engineers do when asked to do a repetitive thing is find ways to automate it. I think the industry had qualms about losing their jobs. But honestly, what are the examples of people losing their jobs to software? Everybody says that this has happened many times, yet examples are hard to come by.
genidoi•2h ago
> But honestly what are the examples of people losing their jobs to software?

And furthermore, what is the full causality chain that links the precise PR in the provided example software to the employment termination decision? Lacking that, can you really assert the software "automated" a dev out of the job?

add-sub-mul-div•3h ago
> If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend.

There are two strong forces at play. Employees generally want to put in the least amount of effort possible and go home at 5. Employers want to save money and pay for fewer employees. AI creates a strong symbiosis here and both sides are focused on a short term win.

senectus1•2h ago
>Employees generally want to put in the least amount of effort possible and go home at 5.

Really?

In my experience, coding engineers are coders because they enjoy the challenge. They like building things, breaking things, and rebuilding things.

It's the white-collar 9-5 office workers that churn out PowerPoint and Word documents that want to do as little as possible.

siddboots•2h ago
Sometime very soon we’ll cross a threshold where most people can do most of their coding through a tool like Claude and be more productive. It will feel like coding still, breaking and building things, but they will get more done in the same time. Everyone will switch.
oulu2006•1h ago
I'm already building whole projects now with AI -- not a single line of code by myself.

I love it, I've coded for 25 years and I don't need to write another line ever again, I just want to build cool things as fast as possible.

pizzalife•55m ago
If you work at a startup, sometimes you have to do extremely mundane and boring things unrelated to your expertise (“wear many hats”). AI is especially useful in those cases so you can quickly go back to working on things you enjoy.
oulu2006•1h ago
not just employees, founders as well :) as a serial founder, I find AI exciting because after my 3rd company, thinking of my 4th was quite exhausting, but AI has re-invigorated my ambition to start another company.
charlesju•2h ago
I think this is the wrong question.

The right question is how much human code can a human push now vs prior to AI.

Everything we've done in coding has been assisted.

Prior to this current generation of web applications, we had the advent of concepts like Object-Oriented Programming, and prior to that, even C was a massive move up from Assembly and punch cards.

AI has written a lot of code. AI has written very little high-velocity production code by itself (i.e., for people with no coding background).

In Ruby on Rails, the concept of fast coding has been around for over 20 years, look up this concept of Scaffolding: https://www.rubyguides.com/2020/03/rails-scaffolding/

So to answer your question,

1. AI has pushed a lot of code
2. AI has pushed almost no code without the oversight of human software engineers
3. Software engineers are pushing a magnitude more code and producing more functional utility and solving more bugs than ever before

I don't know what the future holds, but I do think that this is not a new trend to use software to help humans build faster, and I don't think software has the ability to fully replace humans (yet).

fugalfervor•2h ago
> Software engineers are pushing a magnitude more code and producing more functional utility and solving more bugs than ever before

Citation needed

ythiscoyness•1h ago
More programmers than ever before makes this implicitly true.

It’s not as clever as the author hoped.

charlesju•1h ago
From my personal account, I started with PHP and Perl (high school and college) and then graduated to Ruby on Rails (early dev career) and now its Python and JS.

I would say Ruby on Rails was a 10x on raw PHP in terms of feature specs per hour and AI is a 10x on Ruby on Rails (and its derivatives).

We're probably 100x the developer productivity on a per developer basis from the early days of Web 2.0 with PHP, just a personal anecdote though.

owebmaster•33m ago
> We're probably 100x the developer productivity on a per developer basis from the early days of Web 2.0 with PHP, just a personal anecdote though.

Only if you compare creating in raw PHP 20 years ago vs using WordPress to launch a website. But to create a project like WordPress from zero now is as difficult as it was 20 years ago.

moralestapia•14m ago
>I think this is the wrong question.

Oh, please don't turn this site into StackOverflow 2.0.

OP's question is well defined. Any sentient being can understand what s/he meant.

welder•2h ago
Estimates will always be off compared to a plugin like WakaTime tracking the real amount of AI-generated code vs human-written code.
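
The point above can be made concrete with a toy sketch. This is purely hypothetical (the event shape and `ai_share` function are invented for illustration, not WakaTime's actual plugin API): a tracker that attributes inserted lines to a source can report a measured share instead of a guess.

```python
# Hypothetical line-provenance tally: attribute edits to "ai" or "human"
# from editor events, then report the AI-written share of the code.
from collections import Counter

def ai_share(events: list[dict]) -> float:
    """events: [{"lines": int, "source": "ai" | "human"}, ...]"""
    totals = Counter()
    for e in events:
        totals[e["source"]] += e["lines"]
    written = totals["ai"] + totals["human"]
    return totals["ai"] / written if written else 0.0

events = [
    {"lines": 120, "source": "ai"},    # accepted completions / agent edits
    {"lines": 30, "source": "human"},  # hand-typed lines
]
print(f"{ai_share(events):.0%}")  # → 80%
```

A real plugin would hook the editor's edit events and flag insertions that came from a completion or agent rather than from keystrokes.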
YZF•1h ago
I don't work for OpenAI and I doubt some random employee is going to come here and share what is likely a secret. I'm in the industry though so I have some idea of what's going on these days, both where I work and more broadly.

AI is getting better at writing code. However, writing code is just some fraction of the work of many software engineers. AI doesn't work independently; it needs to be guided, and its work needs to be reviewed, tested, etc. There are some domains where it does better and some domains where it doesn't. There's a range of "AI" work between auto-complete-style work, assisting in understanding a code base, and writing code from some spec or doing other types of work.

All in all I would say it's a decent improvement to productivity for many situations. It's really hard to say how much and it's also not a zero sum game, as productivity improves there's more work.

Something to keep in mind is that if you look at a modern software project likely most of the code executing is not code written by the developers of that project. There's a huge stack of open source bits executing for almost any new project.

Specifically in OpenAI you also need to consider what type of software they are likely writing. Some of it may be more or less "vanilla" code and other is likely very specialized/performance critical. The vanilla code like API wrappers or simple front end pieces is likely more amenable to be written by AI whereas the more cutting edge algorithmic/scheduling/optimization work is almost certainly not done by AI. At least yet.

As software organizations become larger there's a lot of overhead and waste. It is possible that AI can enable smaller teams and that has a multiplicative effect because it lets you reduce that waste/overhead. There are likely also software engineers who will become better/adapt to new workflows and some who will not. It's really hard to say where things are going but overall my sense is that this like many other innovations will lead to more software and more jobs and not the other way around. There are many moving pieces here, not just AI itself but geopolitics, macro-economics, etc. Where are those new jobs going to get created, what new types of software/technology are going to be created etc. etc. History seems to show us that we'll adapt/evolve and grow.

theusus•46m ago
IME I have had to review a lot of code written by AI, at times almost all of it, and sometimes write the code myself because LLMs just don't get it. AI has written 95% of my code, but never without review.
crazylogger•24m ago
This is like asking me "how much of your software is built by the compiler?" -> the answer is 100%.

Ask "how much did you build then?" -> also 100%.

The compiler and I operate on different layers.