
AI will make formal verification go mainstream

https://martin.kleppmann.com/2025/12/08/ai-formal-verification.html
435•evankhoury•7h ago•212 comments

alpr.watch

https://alpr.watch/
689•theamk•12h ago•338 comments

No Graphics API

https://www.sebastianaaltonen.com/blog/no-graphics-api
491•ryandrake•9h ago•90 comments

Announcing the Beta release of ty

https://astral.sh/blog/ty
411•gavide•8h ago•80 comments

Midjourney is alemwjsl

https://www.aadillpickle.com/blog/midjourney-is-alemwjsl
131•aadillpickle•6d ago•47 comments

GPT Image 1.5

https://openai.com/index/new-chatgpt-images-is-here/
366•charlierguo•11h ago•183 comments

Introduction to Software Development Tooling (2024)

https://bernsteinbear.com/isdt/
41•vismit2000•3h ago•4 comments

Pricing Changes for GitHub Actions

https://resources.github.com/actions/2026-pricing-changes-for-github-actions/
550•kevin-david•11h ago•627 comments

I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in hours

https://simonwillison.net/2025/Dec/15/porting-justhtml/
100•pbowyer•6h ago•57 comments

No AI* Here – A Response to Mozilla's Next Chapter

https://www.waterfox.com/blog/no-ai-here-response-to-mozilla/
179•MrAlex94•7h ago•113 comments

40 percent of fMRI signals do not correspond to actual brain activity

https://www.tum.de/en/news-and-events/all-news/press-releases/details/40-percent-of-mri-signals-d...
416•geox•15h ago•179 comments

Mozilla appoints new CEO Anthony Enzor-Demeo

https://blog.mozilla.org/en/mozilla/leadership/mozillas-next-chapter-anthony-enzor-demeo-new-ceo/
457•recvonline•15h ago•714 comments

Show HN: Titan – JavaScript-first framework that compiles into a Rust server

https://www.npmjs.com/package/@ezetgalaxy/titan
14•soham_byte•5d ago•6 comments

Sei AI (YC W22) Is Hiring

https://www.ycombinator.com/companies/sei/jobs/TYbKqi0-llm-engineer-mid-senior
1•ramkumarvenkat•4h ago

VA Linux: The biggest dotcom IPO

https://dfarq.homeip.net/va-linux-the-biggest-dotcom-ipo/
5•giuliomagnifico•5d ago•0 comments

Thin desires are eating life

https://www.joanwestenberg.com/thin-desires-are-eating-your-life/
381•mitchbob•1d ago•149 comments

Tesla Robotaxis in Austin Crash 12.5x More Frequently Than Humans

https://electrek.co/2025/12/15/tesla-reports-another-robotaxi-crash-even-with-supervisor/
98•hjouneau•2h ago•51 comments

Dafny: Verification-Aware Programming Language

https://dafny.org/
41•handfuloflight•6h ago•21 comments

Testing a cheaper laminar flow hood

https://chillphysicsenjoyer.substack.com/p/testing-a-cheaper-laminar-flow-hood
26•surprisetalk•4d ago•6 comments

Japan to revise romanization rules for first time in 70 years

https://www.japantimes.co.jp/news/2025/08/21/japan/panel-hepburn-style-romanization/
146•rgovostes•20h ago•128 comments

Show HN: Learn Japanese contextually while browsing

https://lingoku.ai/learn-japanese
37•englishcat•4h ago•17 comments

Sega Channel: VGHF Recovers over 100 Sega Channel ROMs (and More)

https://gamehistory.org/segachannel/
234•wicket•16h ago•38 comments

The World Happiness Report is beset with methodological problems

https://yaschamounk.substack.com/p/the-world-happiness-report-is-a-sham
98•thatoneengineer•1d ago•116 comments

Nvidia Nemotron 3 Family of Models

https://research.nvidia.com/labs/nemotron/Nemotron-3/
164•ewt-nv•1d ago•30 comments

Writing a blatant Telegram clone using Qt, QML and Rust. And C++

https://kemble.net/blog/provoke/
96•tempodox•13h ago•54 comments

Chat-tails: Throwback terminal chat, built on Tailscale

https://tailscale.com/blog/chat-tails-terminal-chat
67•nulbyte•7h ago•12 comments

Show HN: TheAuditor v2.0 – A "Flight Computer" for AI Coding Agents

https://github.com/TheAuditorTool/Auditor
16•ThailandJohn•15h ago•7 comments

Twin suction turbines and 3-Gs in slow corners? Meet the DRG-Lola

https://arstechnica.com/cars/2025/11/an-electric-car-thats-faster-than-f1-around-monaco-thats-the...
8•PaulHoule•5d ago•3 comments

Meta's new A.I. superstars are chafing against the rest of the company

https://www.nytimes.com/2025/12/10/technology/meta-ai-tbd-lab-friction.html
84•furcyd•6d ago•116 comments

Show HN: Sqlit – A lazygit-style TUI for SQL databases

https://github.com/Maxteabag/sqlit
126•MaxTeabag•1d ago•18 comments

30 years of <br> tags

https://www.artmann.co/articles/30-years-of-br-tags
162•FragrantRiver•3d ago

Comments

dansjots•2d ago
What an incredible article. More than its impressive documented scope and detail, I love it foremost for conveying what the zeitgeist felt like at each point in history. This human element is something usually only passed on by oral tradition, and it is very difficult to capture in cold, academic settings.

It’s fashionable to dunk on “how did all this cloud cruft become the norm”, but seeing a continuous line through history of how circumstances developed upon one another, where each link is individually the most rational decision in its given context, makes them an understandable misfortune of human history.

squimmy26•2d ago
Brilliant article, haven't read an industry retrospective that's as high quality as that for a while.
ksec•2d ago
This is such a great read for those of us who lived through it, and it really should be on the front page of HN.

I really wish it had added a few things.

>The concept of a web developer as a profession was just starting to form.

Webmaster. That was what we were called. And somehow people were amazed at what we did, when 99% of us, as the article says, really had very, very little idea about the web. (But it was fun.)

>The LAMP Stack & Web 2.0

This completely skipped the part about Perl. And Perl was really big; I bet at one point in time most websites were running on Perl. cPanel, Slashdot, etc. The design of Slashdot is still pretty much the same today as most of the Perl CMSes of that era. Once everyone realised C wouldn't be part of the Web, CGI-BIN Perl took over. We had Perl scripts all over the web for people to copy and paste, FTP-upload, and CHMOD before PHP arrived. Many forums at the time were also Perl scripts.

Speaking of Slashdot, after that came Digg. That was all before Reddit and HN. I think there used to be something about HN like Fight Club: "The first rule of Fight Club is you do not talk about Fight Club." And HN in the late 00s or early 10s was simply referred to as the orange site by journalists / web reporters.

And then we could probably talk about Digg v4, and why you don't redesign something that is working perfectly.

>WordPress, if you wanted a website, you either learned to code or you paid someone who did.

That was part of the CMS war, or the blogging-platform war before it was even called a blog. There were many, including ones using Perl / CGI-BIN. I believe it came down to Movable Type vs WordPress.

And it also missed forums: Ikonboard (based on Perl) > Invision (PHP) vs vBulletin. Just like CMS / blog software, there used to be some Perl vs PHP forum software as well. And of course we all know PHP ultimately won.

>Twitter arrived in 2006 with its 140-character limit and deceptively simple premise. Facebook opened to the public the same year.

Oh, I wish they had mentioned MySpace and Friendster, the social networks before Twitter and Facebook. I believe I still have my original @ksec Twitter handle registered but lost access to it. It has been sitting there for years; anyone who knows how to get it back, please ping me. Edit: And I just realised my HN proton email address hasn't been logged in for months, for some strange reason.

>JavaScript was still painful, though. Browser inconsistencies were maddening — code that worked in Firefox would break in Internet Explorer 6, and vice versa.

Oh, it really missed the most important piece of that web era: Firefox vs IE. Together we pushed Firefox beyond 30%, and in some cases 40%, of browser market share. That is insanely impressive considering nearly all of that usage was not from work, because enterprise and business PCs were still on IE6.

And then Chrome came. And I witnessed, and realised, how fast things can change. It was so fast that, without any of the fanfare Mozilla had, people were willing to download and install Google Chrome. To this day I have never used Chrome as my main browser, although it has been a secondary browser since the day it launched.

>Version control before Git was painful

There was Hg / Mercurial. If anything was going to take over from SVN, it should have been Hg. For whatever reason I have always been on the wrong side of history, or of the mainstream, although that is mostly personal preference: Pascal over C, and later Delphi over Visual C++; Perl over PHP; FreeBSD over Linux; Hg over Git.

>Virtual private servers changed this. You could spin up a server in minutes, resize it on demand, and throw it away when you were done. DigitalOcean launched in 2011 with its simple $5 droplets and friendly interface.

Oh, VPS was a thing long before DO. DO was mostly copying Linode from the start, and that is not a bad thing, considering Linode at the time was the most developer-friendly VPS provider, taking the crown from, I believe, Rackspace? Or Rackspace acquired one of those VPS providers before Linode became popular; I can't quite remember.

>Node.js .....Ryan Dahl built it on Chrome's V8 JavaScript engine, and the pitch was simple: JavaScript on the server.

I still think Node.js and JavaScript on the server was a great idea with the wrong execution, especially around Node.js and NPM. One could argue there is no way we would have known without first trying it, and that is certainly true. It was also insanely overhyped in the post-Rails era around 2012–2014, because of Twitter's fail whales and the idea that Rails couldn't scale. I think the true spiritual successor is Bun, integrating everything together very neatly. I just wish I could use something other than JavaScript. (On the wrong side of history again: I really liked CoffeeScript.)

>The NoSQL movement was also picking up steam. MongoDB

Oh, I remember the overhyped NoSQL MongoDB train on HN and the internet. CouchDB as well. In reality today, SQLite, PlanetScale Postgres / Vitess MySQL, or ClickHouse is enough for 99% of use cases. (Or maybe I don't know enough NoSQL to judge its usefulness.)

>How we worked was changing too. Agile and Scrum had been around since the early 2000s,

Oh, the worst part of Agile and Scrum isn't what it did to the tech industry; it is what it did to companies outside of it. I don't think most people realise that by the mid-2010s tech was dominating mainstream media, and words like Agile were floating around in many other industries, which all needed to be Agile. Especially American companies: finance companies that were not tech but used these terms in their KPIs because it was hip or cool, along with consulting firms like McKinsey. The Agile movement took over a lot of industries like a plague.

This reply is getting too long. But I want to go back to the premise and conclusion of the post,

>I'm incredibly optimistic about the state of web development in 2025....... We also have so many more tools and platforms that make everything easier.

I don't know, and I don't think I agree. AI certainly makes many of the steps we do now easier. But conceptually speaking, everything is still a bag of hurt; nobody is asking why we need those extra steps in the first place. Dragging something over via FTP is still easier. Editing in WYSIWYG Dreamweaver is way more fun. Just like I think desktop programming should be more Delphi-like, in many ways WebObjects is still ahead of many web frameworks today. Even Vagrant is still easier than what we have today. The only good thing is that Bun, Rails, HTMX, and even HTML / the browser are finally swinging back in another (my preferred) direction. Safari 26.2 is finally somewhat close to Firefox and Chrome in compatibility.

The final battle left is JPEG XL, or maybe AV2 AVIF will prove good enough. The web is finally moving in the right direction.

martinky24•2d ago
I really enjoyed reading this, especially as someone who hasn’t done much web front end work!
jaimie•2d ago
This was a very well written retrospective on web development. Thank you for sharing!
1718627440•2d ago
> For that, you needed CGI scripts, which meant learning Perl or C. I tried learning C to write CGI scripts. It was too hard. Hundreds of lines just to grab a query parameter from a URL. The barrier to dynamic content was brutal.

That's folk wisdom, but is it actually true? "Hundreds of lines just to grab a query parameter from a URL."

    /* Needs getenv from <stdlib.h>; strstr, strchr, strlen, and the
       POSIX strdup/strndup from <string.h>. */
    #include <stdlib.h>
    #include <string.h>

    /*@null@*/
    /*@only@*/
    char *
    get_param (const char * param)
    {
        const char * query = getenv ("QUERY_STRING");
        if (NULL == query) return NULL;

        /* Find the first occurrence of param followed by '='. */
        char * begin = strstr (query, param);
        if ((NULL == begin) || (begin[strlen (param)] != '=')) return NULL;
        begin += strlen (param) + 1;

        /* The value runs to the next '&' or to the end of the string. */
        char * end = strchr (begin, '&');
        if (NULL == end) return strdup (begin);

        return strndup (begin, end-begin);
    }
In practice you would probably parse all parameters at once and maybe use a library.

I recently wrote a survey website in pure C. I considered Python first, but due to having written an HTML generation library earlier, it was quite a cakewalk in C. I also used the CGI library of my OS, which, granted, was some of the worst code I ever refactored, but afterwards it was quite nice. Also, SQLite is awesome. In the end I statically linked it, so I got a single binary to upload anywhere. I don't even need to set up a database file; this is done by the program itself. It also could be tested without a webserver, because the CGI library supports passing variables over stdin. Then my program outputs the webpage on stdout.

So my conclusion is: CRUD websites in C are easy, actually a breeze. Maybe that has my previous conclusion as a prerequisite: HTML represents a tree, and string interpolation is the wrong tool to generate a tree description.
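
To make the tree point concrete, here is a minimal sketch of tree-based generation (the names are invented for illustration; this is not the commenter's library, and escaping and allocation checks are omitted):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Node {
        const char *tag;            /* NULL marks a text node */
        const char *text;           /* payload for text nodes */
        struct Node *child, *next;  /* first child, next sibling */
    } Node;

    static Node * mknode (const char * tag, const char * text)
    {
        Node * n = calloc (1, sizeof *n);
        n->tag = tag;
        n->text = text;
        return n;
    }

    static void append (Node * parent, Node * child)
    {
        Node ** p = &parent->child;
        while (*p) p = &(*p)->next;
        *p = child;
    }

    /* Serializing the tree in one place means tags always balance:
       a closing tag can never be forgotten. */
    static void render (const Node * n, FILE * out)
    {
        for (; n; n = n->next) {
            if (NULL == n->tag) { fputs (n->text, out); continue; }
            fprintf (out, "<%s>", n->tag);
            render (n->child, out);
            fprintf (out, "</%s>", n->tag);
        }
    }

    int main (void)
    {
        Node * body = mknode ("body", NULL);
        Node * para = mknode ("p", NULL);
        append (para, mknode (NULL, "Hello, tree"));
        append (body, para);
        render (body, stdout);  /* <body><p>Hello, tree</p></body> */
        return 0;
    }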

flanfly•1d ago
Good showcase. Your code will match the first parameter that has <param> as a suffix, not necessarily <param> exactly (username=blag&name=blub will return blag). It also doesn't handle any percent encoding.
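
That failure mode is easy to reproduce with a small driver, assuming the get_param above is compiled alongside it:

    #include <stdio.h>
    #include <stdlib.h>

    extern char * get_param (const char * param);

    int main (void)
    {
        /* "name" also occurs as a suffix of "username", and strstr
           finds that occurrence first. */
        setenv ("QUERY_STRING", "username=blag&name=blub", 1);
        char * value = get_param ("name");
        printf ("%s\n", value);  /* prints "blag", not "blub" */
        free (value);
        return 0;
    }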
1718627440•1d ago
> Your code will match the first parameter that has <param> as a suffix, no necessarily <param> exactly

Depending on your requirements, that might be a feature.

> It also doesn't handle any percent encoding.

This does literal matches, so yes you would need to pass the param already percent encoded. This is a trade off I did, not for that case, but for similar issues. I don't like non-ASCII in my source code, so I would want to encode this in some way anyway.

But you are right, you shouldn't put this into a generic library. Whether it suffices for your project or not, depends on your requirements.

stouset•7h ago
This exact mindset is why so much software is irreparably broken and riddled with CVEs.

Written standard be damned; I’ll just bang out something that vaguely looks like it handles the main cases I can remember off the top of my head. What could go wrong?

1718627440•6h ago
Most commenters seem to miss that this is throwaway code for HN, with a maximum allocated time of five minutes. I wouldn't commit it like this. The final code did cope with percent-encoding, even though the project didn't take any user-generated values at all. And I did read the RFCs, which honestly most developers I meet don't care to do. I also made sure the percent-decoding function did not rely on ASCII ordering (it only relies on A–Z being contiguous), because of portability (EBCDIC), and I have some professional honor.
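
For what it's worth, a lookup table of hex digits avoids ordering assumptions beyond what the C standard itself guarantees; a quick sketch, not the actual function from that project:

    static int hexval (int c)
    {
        if (c >= '0' && c <= '9') return c - '0';  /* digits are contiguous per the C standard */
        switch (c) {
        case 'a': case 'A': return 10;
        case 'b': case 'B': return 11;
        case 'c': case 'C': return 12;
        case 'd': case 'D': return 13;
        case 'e': case 'E': return 14;
        case 'f': case 'F': return 15;
        default: return -1;
        }
    }

    /* Decodes %XX escapes and '+' in place; returns 0 on malformed input. */
    static int percent_decode (char * s)
    {
        char * out = s;
        for (; *s; s++) {
            if ('%' == *s) {
                int hi = hexval (s[1]);
                int lo = (hi < 0) ? -1 : hexval (s[2]);
                if (lo < 0) return 0;
                *out++ = (char) (hi * 16 + lo);
                s += 2;
            } else if ('+' == *s) {
                *out++ = ' ';  /* form encoding maps spaces to '+' */
            } else {
                *out++ = *s;
            }
        }
        *out = '\0';
        return 1;
    }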
bruce343434•5h ago
I get that, but your initial comment implied you were about to showcase a counter to "Hundreds of lines just to grab a query parameter from a URL"; instead you showed that "poorly and incompletely parsing a single parameter can be done in less than 100 lines".

You said you allocated 5 minutes max to this snippet; well, in PHP this would be 5 seconds and 1 line. And it would be a proper solution.

    $name = $_GET['name'] ?? SOME_DEFAULT;
1718627440•4h ago
And in C it looks like this, which is also a proper solution; I did not measure the time it took me to write it.

    name = cgiGetValue (cgi, "name");
    if (!name) name = SOME_DEFAULT;
If you allow for GCC extensions, it looks like this:

    name = cgiGetValue (cgi, "name") ?: SOME_DEFAULT;
shakna•2h ago
That would fail on a user supplying multiple values where you don't expect them.

> If multiple fields are used (i.e. a variable that may contain several values) the value returned contains all these values concatenated together with a newline character as separator.

recursive•6h ago
Ampersands are ASCII, but also need to be encoded to be in a parameter value.
1718627440•6h ago
Yeah, but you can totally choose to not allow that in your software.
recursive•5h ago
That's true. Your argument about how short parameter extraction can be gets a little weaker, though, if you only solve it for the easy cases. Code can be shorter if it solves a simplified version of the problem statement.
stouset•7h ago
Further, when retrieving multiple parameters, you have a Shlemiel-the-painter algorithm.

https://www.joelonsoftware.com/2001/12/11/back-to-basics/

1718627440•6h ago
Thanks, good author; I also like to read him. Honestly, not parsing the whole query string at once feels kind of dumb. To quote myself:

> In practice you would probably parse all parameters at once and maybe use a library.
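
A single scan is enough for that; a sketch only (no percent-decoding, fixed capacity, mutates its input):

    #include <stdio.h>
    #include <string.h>

    typedef struct { char *key; char *value; } Param;

    /* Splits "a=1&b=2" into key/value pairs in one pass, so later
       lookups don't re-scan the query string parameter by parameter. */
    static size_t parse_query (char * query, Param * out, size_t max)
    {
        size_t n = 0;
        for (char * tok = strtok (query, "&"); tok && n < max;
             tok = strtok (NULL, "&")) {
            char * eq = strchr (tok, '=');
            if (NULL == eq) continue;
            *eq = '\0';
            out[n].key = tok;
            out[n].value = eq + 1;
            n++;
        }
        return n;
    }

    int main (void)
    {
        char query[] = "user=alice&lang=en&debug=1";
        Param params[16];
        size_t n = parse_query (query, params, 16);
        for (size_t i = 0; i < n; i++)
            printf ("%s -> %s\n", params[i].key, params[i].value);
        return 0;
    }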

bryanlarsen•7h ago
> HTML represents a tree and string interpolation is the wrong tool to generate a tree description.

Yet 30 years later it feels like string interpolation is the most common tool. It probably isn't, but still surprisingly common.

1718627440•6h ago
Which is really sad. This is the actual reason why I preferred C over Python[*] for that project: so I could use my own library for HTML generation, which does exactly that. It also ameliorates the `goto cleanup;` thing, since now you can just tell the library to throw subtrees away. And the best thing is that you can MOVE and COPY them, which means you can generate code once, then fill it with data, and still modify it later. This means you can also refer to earlier generated values to generate something else, without needing to store everything twice or reparse your own output.

[*] I mean yeah, I could have written a wrapper, but that would have taken far more time.

toast0•5h ago
The thing is, the browser needs the tree, but the server doesn't really need the whole tree.

Building the tree on the server is usually wasted work. There aren't a lot of tree-oriented, output-as-you-make-it libraries.

1718627440•4h ago
My point is that treating it as the tree it is, is the only way to really make it impossible to produce invalid HTML. You could also actually validate not just the syntax but also the semantics.

> There aren't a lot of tree-oriented, output-as-you-make-it libraries.

That was actually the point of my library, although I must admit I haven't implemented streaming the HTML output before the whole tree has been composed. It isn't actually that complicated; what I would need to implement is making part of the tree immutable, so that the HTML for it can already be generated.

1718627440•2d ago
> Every page on your site needed the same header, the same navigation, the same footer. But there was no way to share these elements. No includes, no components.

That's not completely true. Webservers have Server Side Includes (SSI) [0]. Also if you don't want to rely on that, 'cat header body > file' isn't really that hard.

[0] https://web.archive.org/web/19970303194503/http://hoohoo.ncs...

Gualdrapo•7h ago
I think they meant that from a vanilla HTML standpoint
alehlopeh•7h ago
HTML frames let you do this way back in the day
pimlottc•7h ago
The article mentions that in the very next sentence

> You either copied and pasted your header into every single HTML file (and god help you if you needed to change it), or you used <iframe> to embed shared elements. Neither option was great.

alehlopeh•6h ago
I’m talking about the frameset and frame tags, not iframes.
pimlottc•1h ago
Ah, okay, you’re right, it’s been a long while since I used those tags…
bigstrat2003•7h ago
Sure, but later in the article it says that when PHP came out, it solved the problem of not being able to do includes. Which again... server-side includes predate PHP. I think this is just an error in the article any way you slice it. I assume it was an oversight, as the author has been around long enough that he almost certainly knows about SSI.
rzzzt•5h ago
PHP's initial release announcement mentions includes as a feature that can be used even if the server does not have SSI support: https://groups.google.com/g/comp.infosystems.www.authoring.c...
1718627440•4h ago
Does it, other than by using PHP? To me it sounds like the feature to use instead of SSI is PHP itself.
1718627440•5h ago
If they insist on only using vanilla HTML, then the problem is unsolved to this day. I think it is actually less solved now, since back then HTML was an SGML application, so you could supply another DTD and have macro expansion on the client.
mixmastamyk•4h ago
The object tag can do it; iframe also, with limitations.
1718627440•3h ago
Does it really? I think this forces you to have a wrapper, and I am not sure you can get rid of all the issues with "display: contents". Also, you are already in the body, so you can't change the head, which makes it useless for the most idiomatic use case for that feature.
mixmastamyk•47m ago
It gets you a header, a footer, and components. Most of the head would be nice, but you typically want a custom title, for example.
tannhaeuser•6h ago
HTML was invented as an SGML vocabulary, and SGML (and thus also XML) has entities/text macros you can use to reference shared documents or fragments, such as shared headers, footers, and site nav, among other things.
1718627440•3h ago
Not sure why you are getting downvoted, as that was pretty much the case before HTML5.
ripe•1d ago
What a comprehensive, well-written article. Well done!

The author traces the evolution of web technology from Notepad-edited HTML to today.

My biggest difference with the author is that he is optimistic about web development, while all I see is a shaky tower of workarounds upon workarounds.

My take is that the web technology tower is built on the quicksand of an out-of-control web standardization process that has been captured by a small cabal of browser vendors. Every single step of history that this article mentions is built to paper over some serious problems instead of solving them, creating an even bigger ball of wax. The latest step is generative AI tools that work around the crap by automatically generating code.

This tower is the very opposite of simple and it's bound to collapse. I cannot predict when or how.

jasperry•4h ago
I was also impressed; I read the whole thing and got a lot of gaps in my history-of-the-web knowledge filled in. And I also agree that the uncritical optimism is the weak point; the article seems put together like a just-so story about how things are bound to keep getting more and more wonderful.

But I don't agree that the system is bound to collapse. Rather, as I read the article, I got this mental image of the web of networked software+hardware as some kind of giant, evolving, self-modifying organism, and the creepy thing isn't the possibility of collapse, but that, as humans play with their individual lego bricks and exercise their limited abilities to coordinate, through this evolutionary process a very big "something" is taking shape that isn't a product of conscious human intention. It's not just about the potential for individual superhuman AIs, but about what emerges from the whole ball of mud as people work to make it more structured and interconnected.

samgranieri•14h ago
Wow. This is an incredible article that tracks just about everything I’ve done with the web over the past 30 years. I started with BBEdit and Adobe PageMill, then went to Dreamweaver for LAMP.
recallingmemory•7h ago
Enjoyable read and a nostalgic trip all the way back to when I learned how to create websites on Geocities. Thanks for this.
Kuyawa•7h ago
> All I needed was Notepad, some HTML, and an FTP client to upload my files

That's what I still do 30 years later

brianwawok•6h ago
Does your site resemble the UX of Hacker News and Craigslist?
notatallshaw•7h ago
> At one company I worked at, we had a system where each deploy got its own folder, and we'd update a symlink to point to the active one. It worked, but it was all manual, all custom, and all fragile.

The first time I saw this, I thought it was one of the most elegant solutions I'd ever seen working in technology. Safe to deploy the files, an atomic switch-over per machine, and trivial to roll back.

It may have been manual, but I'd worked with deployment processes that involved manually copying files to dozens of boxes and following a 10-to-20-step sequence of manual commands on each box. Even when I first got to use automated deployment tooling at the company I worked at, it was fragile, opaque, and a configuration nightmare, built primarily for OS installation of new servers and forced to work with applications as an afterthought.

toast0•5h ago
> It may have been manual

It's pretty easy to automate a system that pushes directories and changes symlinks. I've used and built automation around the basic pattern.
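
The core step is small enough to sketch in C (POSIX calls; the path names are made up, and error handling is trimmed). rename() over an existing symlink replaces it atomically, which is what makes the pattern safe and rollback trivial:

    #include <stdio.h>
    #include <unistd.h>

    int activate_release (const char * release_dir)
    {
        /* Build the new link under a temporary name... */
        unlink ("current.tmp");
        if (symlink (release_dir, "current.tmp") != 0) { perror ("symlink"); return -1; }

        /* ...then atomically swap it into place. Readers see either
           the old target or the new one, never a broken link. */
        if (rename ("current.tmp", "current") != 0) { perror ("rename"); return -1; }
        return 0;
    }

    int main (void)
    {
        return activate_release ("releases/2025-12-15") == 0 ? 0 : 1;
    }

Rolling back is the same call with the previous release directory as the argument.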

shimms•5h ago
It’s been a while (a decade?!), but if I recall correctly, Capistrano did this for Rails deployments too, didn’t it?
AznHisoka•3h ago
I am now feeling old for using Capistrano even today. I think there might be “cooler and newer” ways to deploy, but I never felt the need to learn what those ways are, since Capistrano gets the job done.
thunderbong•2h ago
Not just Rails. Capistrano is tech-stack agnostic. It's possible to deploy a Node.js project using Capistrano.

And yes, it's truly elegant.

Rollbacks become trivial should you need them.

outofmyshed•7h ago
This is a great overview of web tech as I more or less recall it. Pre-PHP CGI wasn't a big deal, but it was more fiddly, and you had to know and understand Apache, broadly. mod_perl and FastCGI made it okay. Only masochists wrote CGI apps in compiled languages. PHP made making screwy web apps low-effort and fun.

I bugged out of front-end dev just before jQuery took off.

rsync•7h ago
"Virtual private servers changed this. You could spin up a server in minutes, resize it on demand, and throw it away when you were done. DigitalOcean launched in 2011 ..."

The first VPS provider, circa fall of 2001, was "JohnCompanies" handing out FreeBSD jails advertised on metafilter (and later, kuro5hin).

These VPS customers needed backup. They wanted the backup to be in a different location. They preferred to use rsync.

Four years later I registered the domain "rsync.net"[1].

[1] I asked permission of rsync/samba authors.

GoatOfAplomb•6h ago
Fantastic read. I did most of my web development between 1998 and 2012. Reading this gave me both a trip down memory lane and a very digestible summary of what I've missed since then.
PaulDavisThe1st•6h ago
I wanted this to end with something like:

"... and through it all, the humble <br> tag has continued playing its role ..."

tehjoker•6h ago
I predict this will be an instant classic article. It concisely contains most of the history a lot of newer hands are missing.
1970-01-01•5h ago
Very nice article.

However, some very heavy firepower was glossed over: TLS/HTTPS gave us the power to actually buy things and share secrets. The WWW would not be anywhere near this level of commercialization if we didn't have that in place.

emilbratt•5h ago
I'm only halfway through, but I just wanted to share that I love this kind of write-up.
cjstewart88•4h ago
Thanks for writing this :)
davidpronk•4h ago
Great read. I have fond memories of all the tricks we used to amaze visitors and fellow developers, like using fonts that weren't installed on the visitor's computer via sIFR. https://mikeindustries.com/blog/sifr
racl101•3h ago
Good ol' br tag. Saves me from having to write padding and margin CSS.
mikeyinternews•1h ago
<br/> *