frontpage.

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
1•todsacerdoti•1m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working, what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•3m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•4m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
1•schwentkerr•7m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
1•blenderob•9m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
1•gmays•9m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•10m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•11m ago•0 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•12m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•14m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•14m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•15m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•15m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•16m ago•2 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•17m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•17m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•17m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•18m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•18m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
2•ghazikhan205•20m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•20m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•21m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•21m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•21m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•21m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•22m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•22m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•23m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•26m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•26m ago•0 comments

My development team costs $41.73 a month

https://philipotoole.com/my-development-team-costs-41-73-a-month/
45•datadrivenangel•5mo ago

Comments

dmitrygr•5mo ago
I think we need a mandatory disclosure on software "was >1% vibecoded" same as we have on allergens for food. This'll prevent its use in any safety-critical place.
darth_avocado•5mo ago
What if the software was not vibe coded but upstream packages were?
dmitrygr•5mo ago
Then they will have such stickers, and for each piece of SW we consider the sticker percentage of the transitive closure of dependencies.
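The transitive-closure accounting described above can be sketched in a few lines. Everything here is hypothetical: the package names, lines-of-code counts, and per-package "vibecoded" fractions are invented purely for illustration.

```python
# Hypothetical sketch: aggregate a "vibecoded %" disclosure across the
# transitive closure of a package's dependencies. All names and numbers
# below are made up for illustration.

def transitive_closure(pkg, deps):
    """Return pkg plus every package reachable through deps."""
    seen, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(deps.get(p, []))
    return seen

def aggregate_vibecoded(pkg, deps, loc, vibecoded_frac):
    """LOC-weighted vibecoded fraction over the transitive closure."""
    closure = transitive_closure(pkg, deps)
    total = sum(loc[p] for p in closure)
    vibe = sum(loc[p] * vibecoded_frac[p] for p in closure)
    return vibe / total

deps = {"app": ["libA", "libB"], "libA": ["libB"], "libB": []}
loc = {"app": 10_000, "libA": 5_000, "libB": 2_000}
frac = {"app": 0.00, "libA": 0.30, "libB": 1.00}

print(f"{aggregate_vibecoded('app', deps, loc, frac):.1%}")  # → 20.6%
```

Weighting by lines of code is one possible choice; a regulator might instead weight by call frequency or by which code paths are reachable in the safety-critical configuration.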
sejje•5mo ago
Then the software was vibe coded
_fzslm•5mo ago
If we do, we need to draw clear distinctions between different kinds of AI-driven development.

What % of human intervention was there? A module written for me by AI, that was tightly specced with function signatures and behaviour cases, is going to be far more reliable (and arguably basically is human developed) than something an AI just wrote and filled in all the blanks with.

delfinom•5mo ago
Technically for safety critical, safe code was to be audited and certified to the standard anyway. We don't just pull random open source packages.

Granted vibe coded junk will quickly get avoided if it is poorly written to the point that it makes auditing insufferable.

bee_rider•5mo ago
This shouldn’t really matter, software can also be written by very bad coders.

If you care about safety, you care about the whole process—coding, sure, but also: code review, testing, what the design specs were, and what the failure-path is for when a bug (inevitably) makes it through.

Big companies produce lots of safety critical code, and it is inevitable that some incompetent people will sneak into the gaps, there. So, it is necessary to design a process that accounts for commits written by incompetent people.

bobsomers•5mo ago
Everything you said is 100% correct.

However, part of designing and upholding a safety-critical software development process is looking for places to reduce or eliminate the introduction of bugs in the first place.

Strong type systems, for example, eliminate entire classes of errors, so mandating that code is written in X language is a pro-active process decision to reduce the introduction of certain types of bugs.

Restricting the use of AI tools could very much be viewed the same way.

paulddraper•5mo ago
So you would suggest ">1% in dynamically typed language" disclaimers as well?
kangalioo•5mo ago
If someone made that happen I'd be ecstatic
tracker1•5mo ago
Github shows the breakdown of languages in a project... you can, for the most part already do this... at least for floss on github.
uncircle•5mo ago
> This shouldn’t really matter, software can also be written by very bad coders.

The issue is that there is a non-zero likelihood that a vibe coder pushes code he doesn’t even understand how it actually works. At least a bad coder had to have written the thing themselves in the first place.

bee_rider•5mo ago
For something safety critical, individual programmers shouldn’t be able to push code directly anyway. However, a vibe-coder spamming the process with bad code could cause it to jam up, and prevent forward progress (assuming the project has a well designed safe process).

I guess I did assume, though, that by “in any safety-critical place” they meant a place with a well-defined and rigorous process (surely there’s some safety-critical code out there written by seat-of-the-pants cowboys, but that is just a catastrophe waiting to happen).

paulddraper•5mo ago
You vastly underestimate the power of Ctrl+V
tracker1•5mo ago
I think it's even more likely at a lot of big companies, especially when a lot of upper managers feel like a developer is an interchangeable cog and there isn't any variance in terms of value beyond output.
wiseowise•5mo ago
Given performance of an average SE, that would be an improvement. So I don’t know what you’re saying.
dmitrygr•5mo ago
Failure modes of human coders are well-understood. Failure modes of LLMs are not yet as well understood.
AnotherGoodName•5mo ago
>It doesn’t remember that last week we made a small refactor to make future development easier, or that I abandoned a particular idea as a dead end

Sometimes you need to keep the context and sometimes you need to reset it.

An example of needing to reset: you ask for X, later realize you meant Y, and the LLM oscillates between them; on an unrelated request it adds X back in, removing Y. Etc.

Clearing the context solves the above. I currently do this by restarting the IDE in IntelliJ, since there isn't a simple button for it. It's a 100% required feature, and knowing about LLM contexts and managing them is going to be a basic part of working with LLMs in the future. Yet the need to do this hasn't quite sunk in yet. It's like the first cars not actually having brakes, with drivers and passengers getting out and putting their feet down. We're at that stage.

What we really need is a detailed context history for the AI and a way to manage it well. "Forget I ever asked this prompt" and "Keep this prompt in mind next time I restart the IDE" are both examples of extremely important and obvious functionality that just doesn't exist right now.
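The "forget"/"pin" bookkeeping the comment asks for can be sketched minimally. To be clear, none of this is a real IDE or LLM API: `ContextManager` and its methods are hypothetical names, and a real implementation would have to manage the model's token window, not just a list of prompts.

```python
# Hypothetical sketch of per-prompt context management. The class and
# method names are invented; no IDE or LLM vendor exposes this API today.

class ContextManager:
    def __init__(self):
        self.history = []  # dicts of {"prompt": str, "pinned": bool}, in order

    def ask(self, prompt):
        self.history.append({"prompt": prompt, "pinned": False})

    def forget(self, prompt):
        # "Forget I ever asked this prompt."
        self.history = [h for h in self.history if h["prompt"] != prompt]

    def pin(self, prompt):
        # "Keep this prompt in mind next time I restart the IDE."
        for h in self.history:
            if h["prompt"] == prompt:
                h["pinned"] = True

    def restart(self):
        # Simulate an IDE restart: only pinned prompts survive.
        self.history = [h for h in self.history if h["pinned"]]

ctx = ContextManager()
ctx.ask("add feature X")
ctx.ask("actually, make it Y instead")
ctx.forget("add feature X")  # stop the model oscillating between X and Y
ctx.pin("actually, make it Y instead")
ctx.restart()
print([h["prompt"] for h in ctx.history])  # → ['actually, make it Y instead']
```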

jokethrowaway•5mo ago
Claude Code has md files with tech notes but it doesn't work too well.

They still need a baby sitter.

do_not_redeem•5mo ago
If the headline is true, this guy values the AI at $41.73, but his own time at $0/hr. If that's how we're measuring things, then my development team costs $0 a month.
otoolep•5mo ago
Blog post author here.

From my perspective I didn't have a development team before. I have one now. I guess I am a member of that team now. But I hadn't thought of it like that -- another strange dimension to working with Copilot (and its ilk).

otoolep•5mo ago
Also, I don't value Copilot at $41.73. What actually happens is that GitHub charges me $41.73. I value it at way more. The consumer surplus here is substantial, IMHO.
bee_rider•5mo ago
I wonder what their profit margin is, on an inference. Wonder if it is positive or negative.
otoolep•5mo ago
I wonder the same thing myself, wouldn't surprise me if it's heavily subsidized. So much compute being given away for free.
the__alchemist•5mo ago
> I have [a development team] now.

This is disconnected enough from how these words are normally used that the statement, and its downstream conclusions, don't have a clear interpretation.

patchymcnoodles•5mo ago
That is a very strange calculation to me, or I missed something. This is an open-source project, so all human contributors cost zero. He does not count himself as a cost; fine and understandable if you don't want to earn from this project, it is sort of an OK way to look at cost. But seen in that light, because of Copilot his "team" now costs $41.73 a month more than before.

But the real cost that would be interesting is time value: does he really spend less time on the same feature?

otoolep•5mo ago
Post author here. Few things.

You are right that when someone (a human) submits a PR it doesn't cost me anything (short of my time to review it). But those folks are not a team, not someone I could rely on or direct. Open-source projects -- successful ones -- often turn into a company, and then hire a dev team. We all know this.

I have no plans to commercialize rqlite, and I certainly couldn't afford a team of human developers. But I've got Copilot (and Gemini when I use it) now. So, in a sense, I now do have a team. And it's allowed me to fix bugs and add small features I wouldn't have bothered to in the past. It's definitely faster (20 mins to fire up my computer, write the code, push the PR vs. 5 mins to create the GitHub issue, assign to Copilot, review, and merge).

Case in point: I'm currently adding change-data-capture to rqlite. Development is going faster, but it's also more erratic because I'm reviewing more, and coding less. It reminds me of when I've been a TL of a software team.
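The time figures in the comment above imply a rough cost per saved hour. This is back-of-the-envelope only: the 20-minute and 5-minute figures are from the comment, but the tasks-per-month number is an assumption, not from the post.

```python
# Rough break-even math using the comment's figures: ~20 minutes per change
# the old way vs ~5 minutes of issue-writing and review with Copilot, at
# $41.73/month. The tasks-per-month figure is an assumed workload.

cost_per_month = 41.73
minutes_saved_per_task = 20 - 5
tasks_per_month = 20  # assumption, not from the post

hours_saved = minutes_saved_per_task * tasks_per_month / 60
print(f"hours saved: {hours_saved:.1f}")                      # → 5.0
print(f"cost per saved hour: ${cost_per_month / hours_saved:.2f}")  # → $8.35
```

Under those assumptions the tool pays for itself whenever the author values his own time above roughly $8/hour; the real sensitivity is in the assumed task count.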

mjr00•5mo ago
> So, in a sense, I now do have a team.

In another, more accurate sense: no, you have a tool, not a team. A very useful tool, but a tool nonetheless.

If you believe you have a team, try taking a two week vacation and see how much work your team does while you're gone.

Nevermark•5mo ago
There is a new continuum. "Team" is just a convenient word to emphasize that "Tools" are moving significantly in the "Teams" direction.

The post emphasizes the degree to which this is and isn't true.

Different people are going to emphasize changing attributes of new situations using different pre-existing words/concepts. That's sensible use of language.

otoolep•5mo ago
>There is a new continuum. "Team" is just a convenient word to emphasize that "Tools" are moving significantly in the "Teams" direction.

Exactly.

mjr00•5mo ago
No, it's clickbait and that's why this submission got flagged, sorry.

A team is composed of people. Being able to prompt an LLM to create a pull request based on specifications is very useful, but it's not a team member, the same way that VSCode isn't a team member even though autocomplete is a massive productivity increase, the same way that pypi isn't a team member even though a central third party dependency repository makes development significantly faster than not having one.

If this article were "I get a massive productivity boost from $41.73/month in developer tools" it'd be honest. As it is, it's dishonest clickbait.

As the saying goes, there is no "AI" in "Team".

Nevermark•5mo ago
That is not a clickbait title. It is normal use of language, and the article's contents are not surprising or misleading relative to the title.

Titles don't need to be pedantic.

patchymcnoodles•5mo ago
OK, that's cool that you can develop faster now, but as the other comment said: it is a tool, not the cost of a team. It is still, for me, a very strange comparison.

But nonetheless, thanks for the explanation :).

indigodaddy•5mo ago
I also submitted this earlier today, prior to this submission, but it was flagged, which confused me, so I'm glad this one got through.

This was an interesting article, and it raised some good points about the fact that the AI never has a continuing backward/forward-looking context for one's project. Perhaps these ideas are being considered as features to add to LLMs somehow, without making it infeasible from a token/context perspective.

nirolo•5mo ago
This is exactly the idea behind the concept of a memory bank, which I think Cline introduced first. It serves as a go-to reference for the project overview and the project's current scope and goals.
indigodaddy•5mo ago
Ah, I'll check this out thanks.
homarp•5mo ago
https://github.com/cline/cline/blob/main/docs/prompting/clin...
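For readers unfamiliar with the idea, a memory-bank file of the kind described above might look roughly like this. The section names and layout are illustrative, not Cline's actual format; the project details are drawn from otoolep's comments in this thread.

```markdown
<!-- Hypothetical memory-bank sketch; structure is illustrative only -->
# Memory Bank: rqlite

## Project overview
Lightweight distributed database built on SQLite.

## Current scope
- Adding change-data-capture (CDC)

## Decisions and dead ends
- Last week: small refactor to make future development easier
- Abandoned a particular idea as a dead end; do not re-suggest it
```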
ohdeargodno•5mo ago
Your software is a wrapper around an already existing, widely used, extremely documented project and basically just extends SQLite with what could have been a regular extension.

No shit it's easy. So is a CRUD PHP service.

keeda•5mo ago
Building a robust, performant distributed database on top of a standalone database is not at all trivial. Sure, you could boil it down to "consensus algorithm + DB" but there are countless edge cases. I am not very familiar with this project, but it seems pretty well-tested: https://philipotoole.com/how-is-rqlite-tested/

And you can look at the code here: https://github.com/rqlite/rqlite

Does seem a wee bit more complicated than a CRUD PHP service.