How I build software quickly

https://evanhahn.com/how-i-build-software-quickly/
120•kiyanwang•5h ago

Comments

anonzzzies•3h ago
We encounter many rough drafts (like yours) in production systems. If the original devs are still there, the story is usually something along the lines of: I showed the rough draft to my manager, they flagged it as done, and I was assigned to another task.
mattmanser•3h ago
Work with bad companies, be surprised by poor managers? Who is the "we" in this context? An agency, I assume?

So that's not a problem with this process itself. You're describing problems with managers, and problems with developers being unable to handle bad managers.

Even putting aside the manager's incompetence, as a developer you can mitigate this easily in many different ways; here are a few:

- Just don't show it to management

- Deliberately make it obviously broken at certain steps

- Take screen shots of it working and tell people "this is a mockup, I still have to do the hard work of wiring it up"

It's all a balancing act of needing to get feedback from stakeholders and managing expectations. If your management is bad, you need to put extra work into managing expectations.

It's like the famous duck story from Jeff Atwood (see jargon number 4): sometimes you have to manage your managers:

https://blog.codinghorror.com/new-programming-jargon/

anonzzzies•2h ago
Sure, but we actually thrive here; my company gets called in when systems are not functioning, badly broken, etc., and they cannot fix it themselves (usually because the people who built it have been gone for decades and it has just been kept running with duct tape since then). We never stay for long; we just patch the system and deliver a report. But to figure out what went wrong and write the report, we find out how it got to be that way, and it's always the same: they suck. We're talking banks, hospitals, factories; it really doesn't matter. What gets written is all garbage, and 'TODO: will refactor later' is all over the place. We see many companies from the inside, and let me tell you: HN is a lovely echo chamber that resembles nothing in the real world.
croes•46m ago
This will get worse with AI
big_hacker•3h ago
I like the writing style. It is simple and effective.

The author said LLMs help. Let's lynch him!

Metalnem•3h ago
I love his writing too! I read this post a few days ago and really liked it, so I started going through his older posts. It's no coincidence that his writing is good—he's actively working to improve it: https://evanhahn.com/economist-style-guide-book-takeaways/.
gbuk2013•3h ago
An important dimension that is not really touched upon in the article is development speed over time. This will decrease with time, project size and team size. Minimising the rate of that decline may require doing things that slow down immediate development for the sake of longer-term velocity. Some examples would be tests, documentation, decision logs, Agile ceremonies, etc.

Some omissions during initial development may have a very long tail of negative impact - obvious examples are not wiring observability into the code from the outset, or not structuring code with easy testing as an explicit goal.

bayindirh•2h ago
Even as a solo developer, I can swear by decision logs, tests and documentation, in that order. I personally keep a "lab notebook" instead of a "decision log"; it chronicles the design in real time and forms the basis of the tests and documentation.

Having a lab notebook allows me to write better documentation faster, even if I start late, and tests allow me to verify that the design doesn't drift over time.

Starting blindly on a one-off tool written in a weekend may be acceptable, but for anything that is going to live longer, building the foundation slowly allows the things built on it to be sound, rational (for the problem at hand) and, more importantly, understandable/maintainable.

Also, as an unpopular opinion, design on paper first, digitize later.

gbuk2013•1h ago
Right, and an important part of that is keeping in mind the other, future developers working on your codebase. You, six months later, are that other developer once the immediate context is gone from your head. :)
bayindirh•1h ago
That's very true. I like to word this a little differently:

> Six months ago, only I and God knew how this code worked. Now, only God knows. :)

lou1306•3h ago
This is why I have grown to appreciate gradual typing, at least for solo projects. In Python-land I can just riff over a few functions/scripts until I get a rough idea of the APIs/workflows I want, then bring mypy into the mix and shape things into their "final" form (this takes maybe a few hours). Rinse and repeat for each new feature, but at every iteration you build up from a "nicely-typed" foundation.

Sometimes a redesign of the types you relied on becomes necessary to accommodate new stuff, but that would be true under any language; otoh, the "exploratory" part of coding feels faster and easier.
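
As a rough sketch of that workflow (the function and field names here are invented for illustration): riff on an untyped version first, then pin the shape down with annotations once it has settled and let mypy enforce it from then on.

    # First pass: no annotations, just riffing on the shape of the API.
    def summarize(records):
        totals = {}
        for r in records:
            totals[r["user"]] = totals.get(r["user"], 0) + r["amount"]
        return totals

    # Second pass: the shape has settled, so pin it down and run `mypy` over it.
    from typing import TypedDict

    class Record(TypedDict):
        user: str
        amount: int

    def summarize_typed(records: list[Record]) -> dict[str, int]:
        totals: dict[str, int] = {}
        for r in records:
            totals[r["user"]] = totals.get(r["user"], 0) + r["amount"]
        return totals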

KronisLV•2h ago
The initial rough draft almost reminds me of the old "Build One to Throw Away" approach, which I think is pretty nice - not getting caught up in making something production ready, but rather exploring the problem space first.

I do admit that modern frameworks also help in that regard, compared to just stitching libraries together, at least for typical webdev stuff rather than more minimalistic CLI utilities or tools: the likes of Ruby on Rails, Django, Laravel, Express, and even ASP.NET or Spring Boot.

manmal•2h ago
Where I live we have this saying about building houses: Build one for your enemy, then build one for your friend, and then build your own one.
rmdashrfv•2h ago
> It can reveal “unknown unknowns”. Often, prototypes uncover things I couldn’t have anticipated.

This is the exact opposite of my experience. Every time I am playing around with something, I feel like I'm experiencing all of its good and none of its bad ... a honeymoon phase if you will.

It's not until I need to cover edge cases and prevent all invalid state and display helpful error messages to the user, and eliminate any potential side effects that I discover the "unknown unknowns".

skrebbel•2h ago
I think you're talking about unknown unknowns in the tool/framework/library. I think the author is talking about unknown unknowns in the problem space.
nasretdinov•2h ago
Yeah, typically when you start thinking something through and actually implementing it, you notice that some important part of the behaviour is missing, and sometimes it's something that means the project is no longer feasible.
rmdashrfv•2h ago
I think this applies to both tools/frameworks/libs and problem spaces
rmdashrfv•2h ago
I was talking about both. Sometimes even in a problem space time constraints demand that you utilize something off the shelf (whether you use part of it or build on top of a custom version of it).

Tools aside, I think everyone with 10+ years of experience can think of a time they had a prototype go well in a new problem space, only to realize during the real implementation that there were still multiple unknown unknowns.

rightbyte•1h ago
Yeah, it's something like how smooth it is to make tools that only you yourself use. They can be full of holes and a swaying house of cards in general, but you can still use them successfully.
astrobe_•1h ago
Yes. I wanted to warn about a rough draft being too rough. There are corners one shouldn't cut, because that is where the actual problems are. I guess rally drivers do their recon at a sustained pace; otherwise they might not realize that, e.g., the bump before that corner is vicious.
nasretdinov•2h ago
One important aspect, also highlighted by others, is that for the long term you actually _don't_ want to focus solely on the immediate task you're solving. Sure, in the short term the tasks get done quicker, but since the end goal typically is a full, coherent solution, you _have_ to step back and look at the bigger picture every now and then. Typically you won't be allocated specific time to do this, so this "take a bird's eye view" part has to be incorporated into day-to-day work instead. It's also typically easier to notice bigger issues while you're already in the trenches, compared to doing "cleanup" separately "later".
pythonbase•2h ago
This post resonates deeply with how I build products, especially in the era of LLMs and AI-assisted coding.

I usually start top-down, sketching the API surface or UI scaffold before diving into real logic. Iteration drives the process: get something running, feel out the edges, and refine.

I favor MVPs that work end-to-end, to validate flow and reduce risk early. That rhythm helps me ship production-ready software quickly, especially when navigating uncertainty.

One recent WIP: https://zero-to-creator.netlify.app/. I built it for my kid, but I’m evolving it into a full-blown product by tweaking the edges as I go.

albertgoeswoof•2h ago
> Data modeling is usually important to get right, even if it takes a little longer. Making invalid states unrepresentable can prevent whole classes of bugs. Getting a database schema wrong can cause all sorts of headaches later

So much this.

Get the data model right before you go live, and everything is so simple; get it wrong and be prepared for constant pain balancing real data, migrations, uptime and new features. Ask me how I know.
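
For illustration, a minimal sketch of "making invalid states unrepresentable" (the order/shipping domain is invented): instead of one record with a status string and nullable fields that only sometimes apply, each state carries exactly the data that is valid for it.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Union

    @dataclass
    class Pending:
        order_id: int

    @dataclass
    class Shipped:
        order_id: int
        shipped_at: datetime  # cannot exist without a timestamp

    @dataclass
    class Cancelled:
        order_id: int
        reason: str  # cannot exist without a reason

    # An order is exactly one of these; "shipped, but with no timestamp" can't be expressed.
    Order = Union[Pending, Shipped, Cancelled]

    def describe(order: Order) -> str:
        if isinstance(order, Shipped):
            return f"shipped at {order.shipped_at:%Y-%m-%d}"
        if isinstance(order, Cancelled):
            return f"cancelled: {order.reason}"
        return "pending"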

marginalia_nu•2h ago
I think scale matters quite a lot here.

If you're building something yourself or in a small team, I absolutely agree with everything written in the post. In fact, I'd emphasize that you should lean into this sort of quick-and-dirty development methodology in such a context, because this is the strength of small-scale development. Done correctly it will have you running circles around larger operations. Bugs are almost always easy to fix later for a small team or solo dev operation, as you can expect everyone involved to have a nearly perfect mental model of the entire project, and the code itself will, regardless of the messes you make, tend to stay relatively simple due to Conway's law.

In larger development projects, fixing bugs and especially architectural mistakes is exponentially more expensive as code understanding is piecemeal, the architecture is inevitably nightmarishly complex (Conway again), and large scale refactoring means locking down parts of the code base so that dozens to hundreds of people can't do anything (which means it basically never happens). In such a setting the overarching focus is and should be on correctness at all steps. The economies of scale will still move things forward at an acceptable pace, even if individual developers aren't particularly productive working in such a fashion.

jillesvangurp•2h ago
Fast builds are important. I've been doing server side stuff for a few decades and there are some things you can do to turn slow builds into fast builds. I mostly work on the JVM but a lot of this stuff ports well to other stacks (e.g. Ruby or python).

Basically there are things you can't avoid that are not necessarily fast (e.g. compilation, docker build, etc.) and things that you can actually control and optimize. Tests and integration tests are part of that. Learning how to write good effective tests that are quick to run is important. Because you might end up with hundreds of those and you'll be spending a lot of your career waiting for those to run. Over and over again.

Here's what I do:

- I run integration tests concurrently. My CPUs max out when I run my tests. My current build runs around 400 integration tests in about 35 seconds. Integration test means the tests are proper black box tests that hit a REST API with my server talking to a DB, Elasticsearch and Redis. Each test might require users/teams and some content set up. We're talking many thousands of API calls happening in about 35 seconds.

- There is no database cleanup in between tests. Database cleanup is slow. Each build starts with an ephemeral docker container. So it starts empty but by the time the build is over you have a pretty full database.

- To avoid test interaction, all data is randomized. I use a library that generates human readable names, email addresses, etc. Creating new users/teams is fast, recreating the database schema isn't. And because at any time there can be 10 separate tests running, you don't want this anyway. Some tests share the same read only test fixture and team. Recreating the same database content over and over again is stupid.

- A proper integration test is a scenario that is representative of what happens in your real system. It's not a unit test. So the more side effects, the better. Your goal is to find anything that might break when you put things together. Finding weird feature interactions, performance bottlenecks, and sources of flakiness is a goal here and not something you are trying to avoid. Real users don't use an empty system. And they won't have it exclusive to themselves either. So having dozens of tests running at the same time adds realism.

- Unit tests and integration tests have different goals. With integration tests you want to cover features, not code. Use unit tests for code coverage. The more features an integration test touches, the better. There is a combinatorial explosion of different combinations of inputs. It's mathematically impossible to test all of them with an integration test. So, instead of having more integration tests, write better scenarios for your tests. Add to them. Refine them with detail. Asserting stuff is cheap. Setting things up isn't. Make the most of what you setup.

- IMHO anything in between scenario tests and unit tests is a waste of time. I hate white box tests, because they are expensive to run and write and yet not as valuable as a good black box integration test. Sometimes you have to. But these are low-value, high-maintenance, expensive-to-run tests. A proper unit test is high value, low maintenance and very fast to run (it mocks/stubs everything it needs, there is no setup cost). A proper integration test is high value, low maintenance, and slow to run. You justify the time investment with value. Low maintenance here means not a lot of code is needed to set things up.

- Your integration test becomes a load and stress test as well. Many teams don't bother with this. I run mine 20 times a day. Because it only takes less than a minute. Anything that increases that build time, gets identified and dealt with. My tests passing gives me a high degree of certainty that nothing important has broken.

- Most of the work creating a good test is setting up the "given" part of a BDD-style test. Making that easy with some helper functions is key. Most of my tests require users, teams, etc. and some objects. So I have a function "createTeam" with some parameters that calls all the APIs to get that done. This gets called hundreds of times in a build. It's a nice one-liner that sets it up. Most of my tests read like this: create a team or teams, do some stuff, assert, do more stuff, assert, etc.

- Poll instead of sleeping. A lot of stuff happens asynchronously, so there is a lot of test code that waits for shit to happen. I use kotest-assertions, which has a nice "eventually" helper that takes a block and runs it until it stops throwing exceptions (or times out). It has a configurable retry interval that backs off with increasing sleep periods. Most things just take a second or two to happen. (There's a rough sketch of this idea after the list.)

- If your CPUs are not maxed out during the test, you need to be running more tests, not less. Server tests tend to be IO blocked, not CPU blocked. And your SSD is unlikely to be the bottleneck. We're talking network IO here. And it's all running on localhost. So, if your CPUs are idling, you can run more tests and can use more threads, co-routines, whatever.

- Get a decent laptop and pay for fast CI hardware. It's not worth waiting 10 minutes for something that could build in about a minute. That speedup is worth a lot. And it's less likely to break your flow state.

This stuff is a lot easier if you engineer and plan for it. Introducing concurrently running tests to a test suite that isn't ready for it can be hard. Engineering your tests to be able to support running concurrently results in better tests. So if you do this properly, you get better tests that run faster. Win win. I've been doing this for a while. I'm very picky about what is and isn't a good test. There are a lot of bad tests out there.
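
The same ideas translate outside the JVM. As a rough Python sketch of two of the points above (randomized data instead of cleanup, and polling instead of sleeping), with invented endpoints and a test server assumed to already be running on localhost:

    import time
    import uuid
    import requests

    BASE = "http://localhost:8000"  # assumed test server

    def create_team():
        # Randomized data, so concurrent tests never collide and no DB cleanup is needed.
        name = f"team-{uuid.uuid4().hex[:8]}"
        resp = requests.post(f"{BASE}/teams", json={"name": name})
        resp.raise_for_status()
        return resp.json()

    def eventually(check, timeout=5.0, interval=0.1):
        # Retry `check` until it stops raising, with a growing back-off, instead of sleeping blindly.
        deadline = time.monotonic() + timeout
        while True:
            try:
                return check()
            except AssertionError:
                if time.monotonic() > deadline:
                    raise
                time.sleep(interval)
                interval = min(interval * 2, 1.0)

    def test_team_becomes_searchable():
        team = create_team()

        def indexed():
            hits = requests.get(f"{BASE}/search", params={"q": team["name"]}).json()
            assert team["id"] in [h["id"] for h in hits]

        eventually(indexed)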

MoreQARespect•1h ago
I find people overfocus on fast running tests, often to the exclusion of tests which test realistically and loosely couple to the code.

This is a pretty easy and natural thing to do because it's quite easy to go "I shaved 2.5 minutes off my build" whereas "I increased the maintainability and realism of our tests, adding 3 minutes to the build" is a much more nebulous and hard thing to justify even when it does save you time in the long run.

As Drucker says, "what gets measured gets managed" <- quantifiable metrics get more attention even when they're less important.

> A proper unit test is high value, low maintenance and very fast to run (it mocks/stubs everything it needs, there is no setup cost).

^^ This is a case in point: mocks and stubs do make for fast-running test code, but they commensurately decrease the realism of that test and increase maintenance overhead. Even in unit tests I've shifted to writing almost zero mocks and stubs and using only fakes.

I've had good luck writing what I call "end to end unit tests" where the I/O boundary is faked while everything underneath it is tested as is, but even this model falls over when the I/O boundary you're faking is large and complex.

In database heavy applications, for instance, so much of the logic will be in this layer that a unit test will demand massive amounts of mocks/stubs and commensurate maintenance and still tell you almost nothing about what broke or what works.
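
To make the fake-vs-mock distinction concrete, a small sketch (the repository interface is invented): the fake is a real, if simplified, implementation of the I/O boundary, so the test exercises actual behaviour instead of asserting on call sequences.

    # A fake: a working in-memory implementation of the storage boundary,
    # as opposed to a mock that only records which calls were made.
    class FakeUserRepo:
        def __init__(self):
            self._users = {}
            self._next_id = 1

        def add(self, email):
            user = {"id": self._next_id, "email": email}
            self._users[self._next_id] = user
            self._next_id += 1
            return user

        def get_by_email(self, email):
            return next((u for u in self._users.values() if u["email"] == email), None)

    # Code under test, written against the boundary rather than a concrete DB.
    def register(repo, email):
        if repo.get_by_email(email) is not None:
            raise ValueError("already registered")
        return repo.add(email)

    def test_register_rejects_duplicates():
        repo = FakeUserRepo()
        register(repo, "a@example.com")
        try:
            register(repo, "a@example.com")
        except ValueError:
            pass
        else:
            raise AssertionError("expected the duplicate to be rejected")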

kukkeliskuu•2h ago
In recent years, I have learned how to build sufficiently robust systems fast.

Here are some things I have learned:

* Learn one tool well. It is often better to use a tool that you know really well than something that on the surface seems more appropriate for the problem. For an extremely large number of real-life problems, Django hits the sweet spot.

Several times I have started a project thinking that maybe Django is too heavy, but soon the project outgrew the initial idea. For example, I just created a status page app. It started as a single-file Django app, but luckily I realized soon that it makes no sense to go around Django's limitations.

* In most applications that fit the Django model, the data model is at the center of everything. Even when making a rough prototype, never postpone data model refactoring. It just becomes more and more expensive and difficult to change over time.

* Most applications don't need to be single-page apps, nor require heavy frontend frameworks. Even for those that can benefit from it, traditional Django views are just fine for 80% of the pages. For the rest, consider Alpine.js/HTMX.

* Most of the time, it is easier to build the stuff yourself. Need to store and edit customers? With Django, you can develop a simple CRM app inside your app in just a few hours (see the sketch after this list). Integrating a commercial CRM takes much more time. This applies to everything: status page, CRM, support system, sales processes, etc., as well as most Django apps/libraries.

* Always choose extremely boring technology. Just use python/Django/Postgres for everything. Forget Kubernetes, Redis, RabbitMQ, Celery, etc. Alpine/HTMX is an exception, because you can avoid much of the Javascript stack.
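
As a rough sketch of the build-it-yourself point above (model and fields invented for illustration): Django's admin gives you list, search and edit screens for a minimal in-app CRM from little more than a model definition.

    # crm/models.py
    from django.db import models

    class Customer(models.Model):
        name = models.CharField(max_length=200)
        email = models.EmailField(unique=True)
        notes = models.TextField(blank=True)
        created_at = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return self.name

    # crm/admin.py -- this is most of the "CRM UI"
    from django.contrib import admin
    from .models import Customer

    @admin.register(Customer)
    class CustomerAdmin(admin.ModelAdmin):
        list_display = ("name", "email", "created_at")
        search_fields = ("name", "email")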

physicsguy•1h ago
I also like Django a lot. I can get a working project up and running trivially fast.

In my day job I work with Go, and while it's fine, I end up writing 10x more code for simple API endpoints, and as soon as you add query parameters for filtering, pagination, etc., it gets even longer. Adding a permissions model on top does the same. Of course there's a big performance difference, but the DB queries largely dominate performance, even in Python, at least for most of the things I do.

aatd86•1h ago
oh that's interesting. Is that due to missing libraries in Go? That could be a nice open source project if so.
physicsguy•1h ago
There's an almost pathological resistance to using anything that might be described as a 'framework' in the Go community in the name of 'simplicity'.

I find such a blanket opinion unhelpful; what's fine for writing microservices is less good for bootstrapping a whole SaaS app, and I think people get in a bit too much of an ideological tizz about it all.

aatd86•55m ago
I don't think anyone would advise to do everything from scratch all the time.

It's mostly about libraries vs opinionated frameworks.

No one in their right mind would say "just use the standard library", but I've seen it online. That discourse is not helping.

I think people get this misconstrued on both sides.

A set of reusable, composable libraries would be the right balance in Go. So not really a "framework" either.

I think that reflects better the actual preferred stance.

physicsguy•35m ago
It's always going to be more work with composable libraries since they don't 'flow'.

Just picking one of the examples I gave, pagination - that requires (a) query param handling (b) passing the info down into your database query (c) returning the pagination info in the response. In Django (DRF), that's all built in, you can even set the default pagination for every endpoint with a single line in your settings.py and write no more code.

In Go your equivalent would be wrangling something (either manually or using something like ShouldBindQuery in Gin) to decode the specific pagination query params and then wrangling that into your database calling code, and then wrangling the results + the pagination results info back.

Composable components therefore always leave you with more boilerplate
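
For reference, a minimal sketch of the DRF side of that comparison (model, serializer and module names are placeholders): one settings entry turns on pagination for every endpoint, and the viewset handles the query params and the response envelope.

    # settings.py -- default pagination for every DRF endpoint
    REST_FRAMEWORK = {
        "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.PageNumberPagination",
        "PAGE_SIZE": 25,
    }

    # views.py
    from rest_framework import viewsets
    from .models import Customer
    from .serializers import CustomerSerializer

    class CustomerViewSet(viewsets.ModelViewSet):
        # ?page=2 is parsed, applied to the queryset, and echoed back
        # (count/next/previous) without any extra code here.
        queryset = Customer.objects.order_by("id")
        serializer_class = CustomerSerializer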

kisamoto•1h ago
Fully agree. I would also say it's easy enough to use Django for (almost) everything for a self contained SaaS startup. Marketing can be done via Wagtail. Support is managed by a reusable app that is a simple static element on every page (similar to Intercom) that redirects to a standard Django page, collects some info about the issue including the user who made it (if authenticated) etc.

I try to simplify the stack further and use SQLite with Borg for backups. Caching leverages Diskcache.

Deployment is slightly more complicated. I use containers and podman with systemd but could easily be a git pull & gunicorn restart.

My frontend practices have gone through some cycles. I found Alpine & HTMX too restrictive to my liking and instead prefer to use Typescript with django-vite integration. Yes it means using some of the frontend tooling but it means I can use TailwindCSS, React, Typescript etc if I want.

GaryNumanVevo•1h ago
I feel like Django has the largest RoI of any framework out there
physicsguy•48m ago
I think Rails is stiff competition, it's just I prefer Python.
creshal•33m ago
> Always choose extremely boring technology. Just use python/Django/Postgres for everything.

Hell, think twice before you consider Postgres. SQLite scales further than most people would expect it to, especially for local development / spinning up isolated CI instances. And for small apps it tends to be good enough for production too.
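
As a small illustration of how little ceremony that takes (a standard Django settings block, nothing project-specific): pointing local development or a throwaway CI run at SQLite is a one-block change.

    # settings.py -- SQLite for local dev / isolated CI databases
    from pathlib import Path

    BASE_DIR = Path(__file__).resolve().parent.parent

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": BASE_DIR / "db.sqlite3",
        }
    }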

benterix•27m ago
> Forget Kubernetes, Redis

While I agree with you, these two are the boring tech of 2025 for me. They work extremely reliably, they have well-defined use cases where they work perfectly and we know very well where they shouldn't be used, we know their gotchas, the interest around them seems to slowly wane. Personally, I'm a huge fan of these, just because they're very stable and they do what they are supposed to do.

whstl•1h ago
I actually try to build it "well" in the first pass, even for prototyping. I'm not gonna say I succeed but at least I try.

This doesn't mean writing tests for everything, and sometimes it means not writing tests at all, but it means that I do my best to make code "testable". It shouldn't take more time to do this, though: if you're making more classes to make it testable, you're already messing it up.

This also doesn't mean compromising on readability, but it does mean eschewing practices like "Clean Code". Functions end up being as large as they need to be. I find that a lot of people, especially those doing Ruby and Java, tend to spend too much time here. IMO having lots of 5-line functions is totally unnecessary, so I just skip this step altogether.

It also doesn't mean compromising on abstractions. I don't even like the "rule of three" because it forces more work down the line. But since I prefer DEEP classes and SMALL interfaces, in the style of John Ousterhout, the code doesn't really take longer to write. It does require some thinking but it's nothing out of the ordinary at all. It's just things that people don't do out of inertia.

One thing I am a bit of hardliner about is scope. If the scope is too large, it's probably not prototype or MVP material, and I will fight to reduce it.

EDIT: kukkeliskuu said below "learn one tool well". This is also key. Don't go "against the grain" when writing prototypes or first passes. If you're fighting the framework, you're on the wrong path IME.

vendiddy•1h ago
I personally find that doing it well in the first pass slows me down and also ends up in worse overall designs.

But I am also pretty disciplined on the 2nd pass in correcting all of the hacks and rewriting everything that should be rewritten.

There are two problems I have with trying to do it right the first time:

- It's hard to know the intricacies of the requirements upfront without actually implementing the thing, which results in designing an architecture with imperfect knowledge

- It's easy to get stuck in analysis paralysis

FWIW I am a huge fan of John Ousterhout. His book may be my all-time favorite on software design.

amelius•1h ago
How I build software quickly: get rid of team members first, communication slows things down.
ChrisMarshallNY•1h ago
I've found that a "rough draft" is pretty hard to maintain as a "draft," when you have a typical tech manager.

Instead, it becomes "final ship" code.

I tend to write ship code from the start, but do so in a manner that allows a lot of flexibility. I've learned to write "ship everywhere"; even my test harnesses tend to be fairly robust, ship-Quality apps.

A big part of that is very high-Quality modules. There's always stuff that we know won't change, or, if so, a change is a fairly big deal, so we sequester those parts into standalone modules and import them as dependencies.

Here's an example of one that I just finished revamping[0]. I use it in this app[1], in the settings popover. I also have this[2] as a baseline dependency that I import into almost everything.

It can make it really fast to develop a new application, and can keep the Quality pretty high, even when that's not a principal objective.

[0] https://github.com/RiftValleySoftware/RVS_Checkbox

[1] https://github.com/RiftValleySoftware/ambiamara

[2] https://github.com/RiftValleySoftware/RVS_Generic_Swift_Tool...

pandemic_region•1h ago
Tangent: is it a Swift thing to have "/* ################################################################## */" comment markers?

It quickly becomes very visually dominant in the source code:

> /* ###################################################################################################################################### */ // MARK: - PUBLIC BASE CLASS OVERRIDES - /* ###################################################################################################################################### */

ChrisMarshallNY•53m ago
Nope. It's a "Me" thing. I write code that I want to see. I have fairly big files, and it makes it easy to scroll through, quickly. It also displays well, when compiling with docc or Jazzy.

My comment/blank line-to-code ratio is about 50/50. Most of my comments are method/function/property headerdoc/docc labels.

Here's the cloc on the middle project:

    github.com/AlDanial/cloc v 2.04  T=0.03 s (1319.9 files/s, 468842.4 lines/s)
    -------------------------------------------------------------------------------
    Language                     files          blank        comment           code
    -------------------------------------------------------------------------------
    Swift                           33           1737           4765           5220
    -------------------------------------------------------------------------------
    SUM:                            33           1737           4765           5220
    -------------------------------------------------------------------------------
paffdragon•59m ago
This is very familiar. Rough draft, some manual execution often wrapped in a unit test executor, or even written in a different scripting language just to verify the idea. This often helped me to show that we don't even want to build the thing, because it won't work the way people want it to.

The part about distraction in code also feels very real. I am really prone to "cleaning up things", then realizing I'm getting into a rabbit hole and my change has grown to a size my mates won't be happy reviewing. These endeavors often end with a complete discard to get back on track and keep the main thing small and focused - frequent small local commits help a lot here. Sometimes I manage to salvage something and publish it in a different PR when time allows.

Business mostly wants the result fast and does not understand tradeoffs in code until the debt hits the size of a mountain that makes even trivial changes painfully slow. But it's about balance, which might be different on different projects.

Small, focused, simple changes definitely help, although people are not always good at slicing a larger solution into smaller chunks. I sometimes see commits that ship completely unused code, unrelated to anything, with a comment that this will be part of some future work... then priorities shift, people come and go, and a year later we have to throw all of that out, because it no longer applies to the current state and no one remembers what the plan for it was.

hamdouni•59m ago
When possible, I try to use real data to test both volume and heterogeneity.

It helps reveal unknowns in the problem space that synthetic data might miss.

hobofan•39m ago
> For example, if you’re making a game for a 24-hour game jam, you probably don’t want to prioritize clean code. That would be a waste of time! Who really cares if your code is elegant and bug-free?

Hate to be an anecdote Andy here, but as someone who has done a lot of code review at (non-game) hackathons in the past (primarily to prevent cheating), the teams that performed the best were also usually the ones with the best code quality and often at least some rudimentary testing setup.

bob1029•15m ago
The gaming use case is what makes this apt advice. If you've got 24h to make a game and you're spending more than ~1h worrying about the source code cleanliness, I don't think it's gonna go well.

Systems like UE blueprints showcase how pointless the pursuit of clean anything is when contrasted with the resulting product experiences.
