frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

I want everything local – Building my offline AI workspace

https://instavm.io/blog/building-my-offline-ai-workspace
731•mkagenius•15h ago•196 comments

Tribblix – The Retro Illumos Distribution

http://www.tribblix.org/
35•bilegeek•2h ago•2 comments

What the Windsurf sale means for the AI coding ecosystem

https://ethanding.substack.com/p/windsurf-gets-margin-called
100•whoami_nr•5h ago•23 comments

I bought a £16 smartwatch just because it used USB-C

https://shkspr.mobi/blog/2025/08/i-bought-a-16-smartwatch-just-because-it-used-usb-c/
157•blenderob•2d ago•73 comments

Breaking the Sorting Barrier for Directed Single-Source Shortest Paths

https://arxiv.org/abs/2504.17033
47•pentestercrab•3h ago•1 comment

Ultrathin business card runs a fluid simulation

https://github.com/Nicholas-L-Johnson/flip-card
977•wompapumpum•21h ago•197 comments

Sandstorm – self-hostable web productivity suite

https://sandstorm.org/
33•nalinidash•3h ago•9 comments

Partially Matching Zig Enums

https://matklad.github.io/2025/08/08/partially-matching-zig-enums.html
4•ingve•32m ago•0 comments

Engineer restores pay phones for free public use

https://www.npr.org/2025/08/04/nx-s1-5484013/engineer-restores-pay-phones-for-free-public-use
110•andsoitis•3d ago•37 comments

Tor: How a military project became a lifeline for privacy

https://thereader.mitpress.mit.edu/the-secret-history-of-tor-how-a-military-project-became-a-lifeline-for-privacy/
317•anarbadalov•17h ago•150 comments

Car has more than 1.2M km on it – and it's still going strong

https://www.cbc.ca/news/canada/nova-scotia/1985-toyota-tercel-high-mileage-1.7597168
11•Sgt_Apone•3d ago•6 comments

Representing Python notebooks as dataflow graphs

https://marimo.io/blog/dataflow
20•akshayka•3d ago•0 comments

A SPARC makes a little fire

https://www.leadedsolder.com/2025/08/05/sparcstation-scsi-termination-fix-magic-smoke.html
17•zdw•3d ago•0 comments

Getting good results from Claude Code

https://www.dzombak.com/blog/2025/08/getting-good-results-from-claude-code/
347•ingve•19h ago•138 comments

Efrit: A native elisp coding agent running in Emacs

https://github.com/steveyegge/efrit
120•simonpure•14h ago•18 comments

How we replaced Elasticsearch and MongoDB with Rust and RocksDB

https://radar.com/blog/high-performance-geocoding-in-rust
237•j_kao•20h ago•63 comments

Hacking Diffusion into Qwen3 for the Arc Challenge

https://www.matthewnewton.com/blog/arc-challenge-diffusion
86•mattnewton•3d ago•7 comments

Jim Lovell, Apollo 13 commander, has died

https://www.nasa.gov/news-release/acting-nasa-administrator-reflects-on-legacy-of-astronaut-jim-lovell/
496•LorenDB•14h ago•100 comments

Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?

412•superasn•13h ago•273 comments

Astronomy Photographer of the Year 2025 shortlist

https://www.rmg.co.uk/whats-on/astronomy-photographer-year/galleries/2025-shortlist
208•speckx•18h ago•31 comments

Let's properly analyze an AI article for once

https://nibblestew.blogspot.com/2025/08/lets-properly-analyze-ai-article-for.html
63•pabs3•6h ago•38 comments

Our European search index goes live

https://blog.ecosia.org/launching-our-european-search-index/
51•maelito•12h ago•7 comments

Unmasking the Sea Star Killer

https://www.biographic.com/unmasking-the-sea-star-killer/
62•sohkamyung•3d ago•11 comments

Window Activation

https://blog.broulik.de/2025/08/on-window-activation/
200•LorenDB•4d ago•114 comments

How to safely escape JSON inside HTML SCRIPT elements

https://sirre.al/2025/08/06/safe-json-in-script-tags-how-not-to-break-a-site/
36•dmsnell•10h ago•18 comments

The surprise deprecation of GPT-4o for ChatGPT consumers

https://simonwillison.net/2025/Aug/8/surprise-deprecation-of-gpt-4o/
359•tosh•15h ago•346 comments

Debugging a mysterious HTTP streaming issue

https://mintlify.com/blog/debugging-a-mysterious-http-streaming-issue-when-cloudflare-compression-breaks-everything
10•skeptrune•3d ago•4 comments

Fire hazard of WHY2025 badge due to 18650 Li-Ion cells

https://wiki.why2025.org/Badge/Fire_hazard
95•fjfaase•3d ago•87 comments

Build durable workflows with Postgres

https://www.dbos.dev/blog/why-postgres-durable-execution
130•KraftyOne•13h ago•47 comments

A robust, open-source framework for Spiking Neural Networks on low-end FPGAs

https://arxiv.org/abs/2507.07284
60•PaulHoule•4d ago•5 comments

Let's properly analyze an AI article for once

https://nibblestew.blogspot.com/2025/08/lets-properly-analyze-ai-article-for.html
63•pabs3•6h ago

Comments

righthand•3h ago
This was excellent!
croes•3h ago
The statistics part will also be relevant for the rest of Trump's presidency
ma73me•2h ago
I'll never judge an article by its HN header again
vunderba•2h ago
Spot-on critical analysis of the blog post "Developers reinvented" by GitHub CEO Thomas Dohmke, which includes such quotes as:

> Many Computer Science (CS) programs still center around problems that AI can now solve competently.

Yeah. No, they do not. Competent CS programs focus on fundamentals, not your ability to invert a binary tree on a whiteboard. [1]

Replacing linear algebra and discrete mathematics with courses called "Baby's First LLM" and "Prompt Engineering for Hipster Doofuses" is as vapid as proposing that CS should include an entire course on how to use git.

[1] https://x.com/mxcl/status/608682016205344768

charcircuit•2h ago
>not your ability to invert a binary tree on a whiteboard.

Knowing how to swap 2 variables and traverse data structures are fundamentals.
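As an aside, both of the "fundamentals" named here are tiny in practice; a toy Python sketch, purely illustrative and not from the thread:

```python
# Toy sketch of the two "fundamentals" mentioned above (illustrative only).

# Swapping two variables: tuple unpacking makes this a one-liner in Python.
a, b = 1, 2
a, b = b, a  # now a == 2, b == 1

# "Inverting" a binary tree is just a recursive traversal that swaps children.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```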

kubb•54m ago
I’m surprised that the creator of Homebrew didn’t know how to do that.
meindnoch•47m ago
If you spend enough time with Homebrew, it's actually not that surprising.
josephg•27m ago
Of course, lots of people are employed despite giant holes in their knowledge of CS fundamentals. There’s more to being an effective developer than having good fundamentals. A lot more.

But there’s still a lot of very important concepts in CS that people should learn. Concepts like performance engineering, security analysis, reliability, data structures and algorithms. And enough knowledge of how the layers below your program works that you can understand how your program runs and write code which lives in harmony with the system.

This knowledge is way more useful than a lot of people claim. Especially in an era of chatgpt.

If you’re weak on this stuff, you can easily be a liability to your team. If your whole team is weak on this stuff, you’ll collectively write terrible software.

thrown-0825•1h ago
Computer Science in academia is pretty out of line with a lot of skills that are actually used on a daily basis by professional software developers.

You can teach fundamentals all day long, but on their first day of work they are going to be asked to adhere to some internal corporate process that is so far removed from their academic experience that they will feel like they should have just taught themselves online.

meindnoch•49m ago
>Computer Science in academia is pretty out of line with a lot of skills that are actually used on a daily basis by professional software developers.

80% of software development boils down to:

1. Get JSON(s) from API(s)

2. Read some fields from each JSON

3. Create a new JSON

4. Send it to other API(s)

Eventually people stopped pretending that you need a CS degree for this, and it spawned the coding bootcamp phenomenon. Alas, it was short-lived because ZIRP was killed, and as of late we've realized we don't even need humans for this kind of work!
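The four steps above, sketched as hypothetical Python. The endpoint URLs and field names are made up, and the actual network calls are left as comments so only the shape of the work is shown:

```python
import json

def transform(payload: dict) -> dict:
    # Step 2: read some fields from the incoming JSON.
    user = payload["user"]
    # Step 3: create a new JSON-shaped document.
    return {"id": user["id"], "display_name": user["name"].title()}

# Step 1 would be something like:
#   payload = requests.get("https://api.example.com/users/42").json()
payload = {"user": {"id": 42, "name": "ada lovelace"}}

outgoing = transform(payload)
# Step 4 would be something like:
#   requests.post("https://api.example.com/profiles", json=outgoing)
print(json.dumps(outgoing))  # {"id": 42, "display_name": "Ada Lovelace"}
```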

thrown-0825•31m ago
And they were right.

We no longer hire junior engineers because it just isn't worth the time to train them anymore.

crinkly•39m ago
Depends what you do. About the sudden large accumulation of layers and corporate SaaS crap in the industry since about 2001, you're right. But for those of us a bit further down the stack, or from before that, it's still pretty useful.
thrown-0825•27m ago
Absolutely, but we are a dying breed in the same way that nobody really knows how to build nuclear power plants anymore.

Most CS grads will end up in a position that has more in common with being an electrician or plumber than an electrical engineer; the difference is that we can't really automate installing wires and pipes to the same degree we have automated service integration and making API calls.

crinkly•6m ago
Not a dying breed. There is just a relatively static demand. Proportionally it looks worse because the rest of the industry has grown massively.

Really the problem is there are too many CS grads. There should be a software engineering degree.

saagarjha•25m ago
I don't get this viewpoint. Yes, of course when you start at your job you will have to learn how JIRA works or how to write a design doc. Obviously nobody is going to teach you that in college. But nobody is going to teach you that online either!
thrown-0825•20m ago
How about performing git bisect to identify when a regression was introduced, or debugging a docker container that is failing to start in your local environment, writing some unit tests for a CI suite, merging a pull request that has some conflicts, etc etc etc.

These are just a couple of examples of things that I see juniors really struggle with: day-one basics of the profession that are consistently missed by interview processes that focus on academic knowledge.

People won't teach you how to solve these problems online, but you will learn how to solve them while teaching yourself.
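For what it's worth, `git bisect` from the example above is conceptually just binary search over commit history. A toy model in Python, assuming commits are ordered oldest to newest and the regression, once introduced, persists:

```python
def first_bad_commit(commits, is_bad):
    """Binary search for the first commit where is_bad(commit) holds,
    which is what `git bisect` automates over a real repository."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid       # bug already present here: look earlier
        else:
            lo = mid + 1   # still good here: look later
    return lo

commits = ["a1", "b2", "c3", "d4", "e5", "f6"]  # made-up commit ids
print(first_bad_commit(commits, lambda c: commits.index(c) >= 3))  # → 3 ("d4")
```

Real `git bisect` does the same halving, except each "is_bad" check is you (or `git bisect run`) testing a checked-out commit.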

saagarjha•19m ago
Yes, and I did plenty of that during my university education. Except Docker because at the time I refused to use Docker.
thrown-0825•13m ago
Great, and if you got a job with us I would have to explain how Docker works because you refused to learn it for some reason.

Point is that what is deemed important in academic circles is rarely important in practice, and when it is, I find it easier to explain a theory or algorithm than to teach a developer how to use an industry-standard tool set.

We should be training devs like welders and plumbers instead of like mathematicians, because practically speaking the vast majority of them will never use that knowledge and will develop an entirely new skill set the day they graduate.

saagarjha•10m ago
I use standard algorithms all the time. Sometimes I have to come up with new ones. And that's not just when I'm working on performance-sensitive roles.

Also, btw, I did eventually learn how to use Docker. I actually vaguely knew how it worked for a while, but I didn't want a Linux VM anywhere near my computer; eventually I capitulated, provided I didn't have a Linux VM running all the time.

janalsncm•11m ago
A weaker version of your argument that might be more popular here involves math requirements.

I had to take calculus and while I think it’s good at teaching problem solving, that’s probably the best thing I can say about it. Statistics, which was not required, would also check that box and is far more applicable on a regular basis.

Yes calculus is involved in machine learning research that some PhDs will do, but heck, so is statistics.

Gigachad•17m ago
Schools still make you manually understand math even though calculators have been perfect for decades. Because it turns out having some magic machine spit out an answer you can’t understand isn’t good and you’ll have no ability to understand when and why the answer is incorrect.
sixhobbits•2h ago
> The sample size is 22. According to this sample size calculator I found, a required sample size for just one thousand people would be 278

I'm all for criticizing a lack of scientific rigor, but this bit pretty clearly shows that the author knows even less about sample sizes than the GitHub guy, so it seems a bit like the pot calling the kettle black. You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.

The bit about running the study multiple times also seems kinda random.

I'm sure this study of 22 people has a lot of room for criticism but this criticism seems more ranty than 'proper analysis' to me.

foma-roje•2h ago
> You certainly don't need to sample more than 25% of any population in order to draw statistical information from it.

Certainly? Now, who is ranting?

brabel•1h ago
It's a basic property of statistics. You would need an extremely varied population to need a sample of 25%, and almost no human population is that varied in practice. Humans are actually very uniform.
astrobe_•2h ago
> The bit about running the study multiple times also seems kinda random.

Reproducibility? But knowing it comes from the CEO of GitHub, who has vested interests in the matter because AI is one of the things that will allow GitHub to maintain its position in the market (or increase revenue from their paid plans, once everyone is hooked on vibe coding, etc.), anyone would take it with a grain of salt anyway. It's like studies funded by big pharma.

pcwelder•2h ago
>I found, a required sample size for just one thousand people would be 278

It's interesting to note that for a billion people this number changes to a whopping ... 385. Doesn't change much.

I was curious: with a sample size of 22 (assuming an unbiased sample, yada yada), when estimating the proportion of people satisfying a criterion, the margin of error is about 22%.

While bad, if done properly, it may still be insightful.
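The figures being traded in this subthread can be reproduced with the standard Cochran sample-size formulas for a proportion at 95% confidence, worst case p = 0.5; a quick sketch:

```python
from math import ceil, sqrt

Z, P, E = 1.96, 0.5, 0.05   # 95% confidence, worst-case proportion, 5% margin

n0 = Z**2 * P * (1 - P) / E**2   # required sample for an "infinite" population

def needed(population):
    # The finite-population correction shrinks n0 for small populations.
    return ceil(n0 / (1 + (n0 - 1) / population))

print(needed(1_000))          # 278 -- matches the calculator quoted above
print(needed(1_000_000_000))  # 385 -- barely moves for a billion people
print(round(Z * sqrt(P * (1 - P) / 22), 2))  # 0.21 -- margin of error at n = 22
```

So the 278 and 385 figures are both consistent with the same formula, and a sample of 22 carries roughly the ±21-22% margin of error noted above.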

NoahZuniga•1h ago
It's a bit annoying that this article on AI is hallucinating itself:

> To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself.

He never said this. It's just false, and it seems like the author didn't even fact-check whether Hayao Miyazaki ever said it.

tough•1h ago
Right, it was an out-of-context presentation of some very rough AI-generated content from their own company, years before LLMs existed.

But yeah, sensationalism and all; people don't do research, so unless you remember it well...

It also got lost in translation from Japanese to English. The work their engineers sampled depicted some kind of zombie-like figures in a very rough form, hence the "insult to life itself", as in literally.

meindnoch•35m ago
Context: https://youtu.be/ngZ0K3lWKRc

Miyazaki is repulsed by an AI-trained zombie animation which reminded him of a friend with disabilities. So the oft quoted part is about that zombie animation.

When the team tells him that they want to build a machine that can draw pictures like humans do, he doesn't say anything, just stares.

vemv•1h ago
> Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

Spot on, I think this every time I see AI art on my Linkedin feed.

Gigachad•12m ago
I've become super sensitive to spotting it now. When I see a restaurant using AI food pictures, I don't want to eat there. Why would I want to do business with people who are dishonest enough to lie about the basics?
thrown-0825•3m ago
Using food pics as an example is hilarious; food photos in advertising have been "faked" for about as long as photography has existed.
bgwalter•43m ago
"AI" could certainly replace Dohmke. It excels at writing such meaningless articles.
rsynnott•22m ago
> Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

This seems to be the fundamental guiding ideology of LLM boosterism; the output doesn't actually _really_ matter, as long as there's lots of it. It's a truly baffling attitude.

Gigachad•15m ago
They always market the % of lines generated by AI. But if you are forced to use a tool that constantly inserts generations, that number is always going to be high, even if the actual benefit is nil or negative.

If the AI tool generates a 30-line function that doesn't work, and you spend time testing and modifying the 3 lines of broken logic, the vast majority of the code was still AI-generated, even though it didn't save you any time.
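The arithmetic behind that example, spelled out (the numbers are the comment's own hypothetical):

```python
ai_lines, human_fixed = 30, 3   # a 30-line generation, 3 broken lines rewritten

# The metric counts lines the AI emitted, not time saved or bugs avoided.
pct_ai = (ai_lines - human_fixed) / ai_lines * 100
print(f"{pct_ai:.0f}% of lines 'AI-generated'")  # 90% -- time saved: unknown
```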

diggan•3m ago
> They always market the % of lines generated by AI

That's crazy; it should really be the opposite. If someone released weights that promised "X% fewer lines generated compared to Y", I'd jump on that in an instant. Most LLMs are way too verbose by default, and some are really hard to prompt into being more concise (looking at you, various Google models).

thrown-0825•5m ago
All of the crypto grifters have shifted to AI.

Fundamentals don't matter anymore, just say whatever you need to say to secure the next round of funding.

crinkly•16m ago
Professional statistician here. Not that I get to do any of that these days, bar read Significance magazine and get angry occasionally.

Looking at the original blog post, it's marketing copy so there's no point in even reading it. The conclusion is in the headline and the methodology is starting with what you want to say and working back to supporting information. If it was in a more academic setting it would be the equivalent of doing a meta-analysis and p-hacking your way to the pre-defined conclusion you wanted.

Applying any kind of rigour to it is pointless but thanks for the effort.