
Nobody knows how to build with AI yet

https://worksonmymachine.substack.com/p/nobody-knows-how-to-build-with-ai
55•Stwerner•49m ago•22 comments

Known Bad Email Clients

https://www.emailprivacytester.com/badClients
21•mike-cardwell•35m ago•12 comments

Linux and Secure Boot certificate expiration

https://lwn.net/SubscriberLink/1029767/43b62a7a7408c2a9/
92•todsacerdoti•8h ago•41 comments

My Self-Hosting Setup

https://codecaptured.com/blog/my-ultimate-self-hosting-setup/
409•mirdaki•13h ago•150 comments

Fstrings.wtf

https://fstrings.wtf/
252•darkamaul•5h ago•69 comments

Hyatt Hotels are using algorithmic Rest “smoking detectors”

https://twitter.com/_ZachGriff/status/1945959030851035223
390•RebeccaTheDev•12h ago•218 comments

Babies made using three people's DNA are born free of mitochondrial disease

https://www.bbc.com/news/articles/cn8179z199vo
96•1659447091•2d ago•48 comments

Valve confirms credit card companies pressured it to delist certain adult games

https://www.pcgamer.com/software/platforms/valve-confirms-credit-card-companies-pressured-it-to-delist-certain-adult-games-from-steam/
728•freedomben•1d ago•706 comments

OpenAI claims Gold-medal performance at IMO 2025

https://twitter.com/alexwei_/status/1946477742855532918
163•Davidzheng•7h ago•245 comments

Pimping My Casio: Part Deux

https://blog.jgc.org/2025/07/pimping-my-casio-part-deux.html
111•r4um•8h ago•30 comments

A 14kb page can load much faster than a 15kb page (2022)

https://endtimes.dev/why-your-website-should-be-under-14kb-in-size/
337•truxs•8h ago•229 comments

Piramidal (YC W24) Is Hiring a Full Stack Engineer

https://www.ycombinator.com/companies/piramidal/jobs/JfeI3uE-full-stack-engineer
1•dsacellarius•4h ago

I avoid using LLMs as a publisher and writer

https://lifehacky.net/prompt-0b953c089b44
133•tombarys•5h ago•81 comments

What is the richest country in 2025?

https://www.economist.com/graphic-detail/2025/07/18/what-is-the-richest-country-in-the-world-in-2025
14•RestlessMind•48m ago•5 comments

YouTube No Translation

https://addons.mozilla.org/en-US/firefox/addon/youtube-no-translation/
118•thefox•8h ago•56 comments

Advertising without signal: The rise of the grifter equilibrium

https://www.gojiberries.io/advertising-without-signal-whe-amazon-ads-confuse-more-than-they-clarify/
133•neehao•14h ago•57 comments

How to write Rust in the Linux kernel: part 3

https://lwn.net/SubscriberLink/1026694/3413f4b43c862629/
230•chmaynard•18h ago•17 comments

Asynchrony is not concurrency

https://kristoff.it/blog/asynchrony-is-not-concurrency/
276•kristoff_it•21h ago•197 comments

Meta says it won’t sign Europe AI agreement, calling it an overreach

https://www.cnbc.com/2025/07/18/meta-europe-ai-code.html
289•rntn•22h ago•389 comments

Astronomers use colors of trans-Neptunian objects to track ancient stellar flyby

https://phys.org/news/2025-07-astronomers-trans-neptunian-track-ancient.html
12•bikenaga•3d ago•4 comments

N78 band 5G NR recordings

https://destevez.net/2025/07/n78-band-5g-nr-recordings/
13•Nokinside•2d ago•0 comments

A CarFax for Used PCs: Hewlett Packard wants to give old laptops new life

https://spectrum.ieee.org/carfax-used-pcs
22•miles•3d ago•20 comments

Debcraft – Easiest way to modify and build Debian packages

https://optimizedbyotto.com/post/debcraft-easy-debian-packaging/
70•pabs3•16h ago•22 comments

An exponential improvement for Ramsey lower bounds

https://arxiv.org/abs/2507.12926
18•IdealeZahlen•6h ago•1 comment

Zig Interface Revisited

https://williamw520.github.io/2025/07/13/zig-interface-revisited.html
10•ww520•2d ago•1 comment

Mr Browser – Macintosh Repository file downloader that runs directly on 68k Macs

https://www.macintoshrepository.org/44146-mr-browser
80•zdw•16h ago•18 comments

Bun adds pnpm-style isolated installation mode

https://github.com/oven-sh/bun/pull/20440
97•nateb2022•15h ago•15 comments

Broadcom to discontinue free Bitnami Helm charts

https://github.com/bitnami/charts/issues/35164
202•mmoogle•21h ago•108 comments

Silence Is a Commons by Ivan Illich (1983)

http://www.davidtinapple.com/illich/1983_silence_commons.html
178•entaloneralie•19h ago•45 comments

Zig's New Writer

https://www.openmymind.net/Zigs-New-Writer/
90•Bogdanp•2d ago•13 comments

GPT-5-reasoning alpha found in the wild

https://twitter.com/btibor91/status/1946532308896628748
56•dejavucoder•4h ago

Comments

anonzzzies•4h ago
Look at those people shouting that this will be AGI / total disruption etc. Seems Elon managed one thing: to amass the dumbest folks together. 99.99% MAGA, crypto, and almost Markov-chain-quality comments.
ImHereToVote•4h ago
Maybe this won't be. How long do you think it will be before a machine can outdo any human in any given domain? I personally think it will be after they are able to rewrite their own code. You?
owebmaster•4h ago
> I personally think it will be after they are able to rewrite their own code.

My threshold is when it can create a new Google

ImHereToVote•3h ago
Why not put it earlier than that? Why not have it start and run its own LLC? I would think that when that LLC is bigger than Google, it might already be obvious.
kasey_junk•4h ago
They write their own code now so how long will it be?
ImHereToVote•3h ago
They "can" write parts of it. But they can't rewire the weights. Those are learned, not coded.
Fade_Dance•3h ago
Seems like this will be one of the areas that improves with multi-agent AI, where groups of agents can operate via consensus, check/test outputs, manage from a higher meta level, etc. Not that any of that would be "magic", but the advantages of expanding laterally to that approach seem fairly obvious when it comes to software development.

So in my eyes it's actually probably more to do with reducing the cost of AI inference by another order of magnitude, at least when it comes to mass-market tools. Existing basic code-generation tools from a single AI are already fairly expensive to run compute-wise.
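The consensus idea described above can be sketched in a few lines. This is a toy illustration only: the "agents" here are plain stub functions standing in for real LLM calls, and majority voting is just one of several aggregation strategies.

```python
from collections import Counter
from typing import Callable, List


def consensus_answer(agents: List[Callable[[str], str]], prompt: str) -> str:
    """Ask every agent the same question and return the majority answer.

    A toy sketch of multi-agent consensus; each agent is a function
    standing in for an LLM call.
    """
    answers = [agent(prompt) for agent in agents]
    # Majority vote: the answer most agents agree on wins (ties broken
    # by first occurrence, per Counter.most_common ordering).
    return Counter(answers).most_common(1)[0][0]


# Hypothetical stub agents: two agree, one dissents.
agents = [lambda p: "42", lambda p: "42", lambda p: "41"]
print(consensus_answer(agents, "What is 6 * 7?"))  # prints "42"
```

A real system would also need the higher-level checking/testing step the comment mentions, since agreement alone doesn't guarantee correctness.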

elif•4h ago
Your contrary certainty has the same humorously over-confident tone.
perching_aix•3h ago
Which is how and why these political strategies work so well.
thm•3h ago
99% of AI influencers are the same people who emailed you pictures as a Word attachment a year ago.
torginus•3h ago
This is what put me off Claude Code. When I wanted to dig in, I tried to watch a few YouTube videos to get an expert's opinion on it, and 90% of the people who talk about it feel like former crypto shills who, judging from their channel history, seem to have never written a single line of code without AI in their lives.
plemer•3h ago
I get it, but have you reviewed high-quality sources or actually tried the product?

Association fallacy: “You know who else was a vegetarian? Hitler.”

haneul•3h ago
As someone who doesn't keep track of the influencer scene at the moment because I am way addicted to building...

You should totally give Claude Code a try. The biggest problem is that it is glaze-optimized, so you have to work at getting it to not treat you like the biggest genius of all time. But when you manage to get into a good flow with it, and your project is very predictably searchable, the results start to be quite helpful, even if just to get unstuck when you're in a rut.

reactordev•3h ago
This. Claude Code was the only one able to grok my 20-year-old C++ codebase so that I could update things deep in its bowels to make it compile, because I had neglected it on a thumb drive for 15 years. I had no mental model of what was going on. Claude built one in a few minutes.
jug•3h ago
It annoys me to experience the huge discrepancy between social media content on AI and actual enterprise use. AI is happening; it's absolutely becoming an integral part of many businesses, including our own. But these guys are just doodling in MS Paint, and they're flooding the channels.
reactordev•3h ago
Enterprises are in the same situation as you are. Many of them are posting marketing about AI without actually having AI. They are using OpenAI APIs to say they have AI.

I can count on my hands the number of enterprises that actually have AI models of their own.

bdangubic•3h ago
just curious, why does an enterprise have to have their own model? company can use ____ (someone else’s model) and still accomplish amazing AI shit in their products
reactordev•3h ago
Because data protection and privacy compliance hasn’t caught up yet.
jgalt212•2h ago
or given up under the unceasing pressure from the AI madness.
garciasn•2h ago
Because I am not permitted to share my code nor client information/data with unapproved third parties; it’s a contractual obligation. So; we train our own models to do those things.

I use Claude Code for building products that don’t have these limitations. And fuck is it amazing. Even little things that would have taken days are done in a single line of text.

rvz•2h ago
> Many of them are posting marketing about AI without actually having AI. They are using OpenAI APIs to say they have AI.

And somehow these companies are now "AI companies", just like in the 2010s your average food market down the street was a "tech company" and the bakery next to it a "blockchain company". This happens all the time with bubbles and manias.

These enterprises today appear even more confused about what they do as they rebrand themselves, and it's a sign they are desperate for survival.

anonzzzies•3h ago
Claude code is good though: no need to watch influencers for that. Or ever.
sorokod•3h ago
Golgafrincham Ark B material.

https://hitchhikers.fandom.com/wiki/Golgafrincham

threatripper•3h ago
We have to wait and test it ourselves to see how far it gets in our daily tasks. If the improvement continues like it did in the past, that would be pretty far. Not quite a full researcher position but an average student assistant for sure.
shiandow•3h ago
I'll believe in AGI when OpenAI stops paying human developers.
brookst•3h ago
I don’t see how this follows. Does AGI mean that it is free to operate and has no hardware / power constraints?

The fact that I see people being paid to dig a trench does not make me doubt the existence of trenching machines. It just means that the tool is not always the best choice for every job.

rvz•2h ago
> Does AGI mean that it is free to operate and has no hardware / power constraints?

It is that, plus an autonomous system that can generate $100B in profits (OpenAI and Microsoft's definition of AGI).

So maybe when we see a commercial airplane with no human pilots on board but an LLM piloting the plane with no intervention needed?

Would you board such a plane?

graycat•33m ago
AGI???? Again, once again, over again, yet again, one more time:

(1) Given triangle ABC, by means of Euclidean construction find point D on line AB and point E on line BC so that the lengths |AD| = |DE| = |EC|.

(2) Given triangle ABC, by means of Euclidean construction inscribe a square so that each corner of the square is on a side of the triangle.

Come ON AGI, let's have some RESULTS that human general intelligence can do -- gee, I solved (1) in the 10th grade.

ogogmad•4h ago
In related news, OpenAI and Google have announced that their latest non-public models have received gold medals in the International Mathematical Olympiad: https://news.ycombinator.com/item?id=44614872

That said, the public models don't even get bronze.

[EDIT] Dupe of this: https://news.ycombinator.com/item?id=44614872

johnecheck•3h ago
Wow. That's an impressive result, though we definitely need some more details on how it was achieved.

What techniques were used? He references scaling up test-time compute, so I have to assume they threw a boatload of money at this. I've heard talk of running models in parallel and comparing results - if OpenAI ran this 10000 times in parallel and cherry-picked the best one, this is a lot less exciting.

If this is legit, then I really want to know what tools were used and how the model used them.
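The cherry-picking strategy speculated about here is often called best-of-n sampling. A minimal sketch, where `sample` and `score` are hypothetical stand-ins for an LLM call and a verifier/grader (here a fixed list of drafts and a toy rubric, to keep the demo deterministic):

```python
from typing import Callable


def best_of_n(sample: Callable[[], str],
              score: Callable[[str], float],
              n: int) -> str:
    """Draw n candidate solutions and keep the highest-scoring one.

    Sketch of the "run it many times and cherry-pick the best" idea;
    in practice `sample` would be a stochastic LLM call and `score`
    a verifier, reward model, or test harness.
    """
    candidates = [sample() for _ in range(n)]
    return max(candidates, key=score)


# Deterministic toy demo: three canned "drafts" scored by a rubric.
drafts = iter(["draft A", "draft B", "draft C"])
rubric = {"draft A": 0.2, "draft B": 0.9, "draft C": 0.5}
print(best_of_n(lambda: next(drafts), rubric.__getitem__, n=3))  # prints "draft B"
```

The catch the comment alludes to: compute cost scales linearly with n, so a result that needed thousands of parallel samples is a very different claim than one from a single pass.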

badgersnake•3h ago
> If this is legit

Indeed.

chvid•4h ago
What is this? Guerilla marketing from a 300B startup?
bgwalter•4h ago
None of the X enthusiasts has even seen a benchmark or used the thing, but we're glad to know that Duke Nukem Forever will be released soon.
mjburgess•4h ago
It's strange that none of these $100B+ companies fund empirical research into the effects of AI tools on actual job roles as part of their "benchmarks". Oh wait, no it's not.
brookst•3h ago
Agree, it would be bizarre if they did.
mjburgess•2h ago
It would be bizarre if they benchmarked the models based on actual task performance?
bawana•3h ago
Well, I asked ChatGPT if I could run Kimi K2 on a 5800X3D with 64 GB of RAM and a 3090, and it said:

Yes, you absolutely can run Kimi-K2-Instruct on a PC with:

✅ CPU: AMD Ryzen 7 5800X3D
✅ GPU: NVIDIA RTX 3090 (24 GB VRAM)
✅ RAM: 64 GB system memory

This is more than sufficient for both loading and running the full Kimi-K2-Instruct model in FP16 or INT8, and quantizing it with weight-only INT8 using Hugging Face Optimum + bitsandbytes.

Kimi K2 has a trillion parameters; even an 8-bit quant would need around a terabyte of system RAM + VRAM.

This is with the free ChatGPT that us peasants use. I don't have the means to run Grok 4 Heavy, DeepSeek, or Kimi K2 to ask them.

I can't wait to see what accidental wars will start when we put AI in the kill chain.
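The commenter's memory arithmetic checks out as a back-of-envelope: weight memory scales as parameter count times bits per weight. A quick sketch (weights only, ignoring activations and KV cache, and using decimal gigabytes):

```python
def quant_footprint_gb(n_params: float, bits_per_weight: int) -> float:
    """Rough lower bound on weight memory, in GB, for a model
    quantized to `bits_per_weight` bits per parameter."""
    return n_params * bits_per_weight / 8 / 1e9


# Kimi K2 is roughly a trillion parameters:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{quant_footprint_gb(1e12, bits):,.0f} GB")
# prints:
# 16-bit: ~2,000 GB
# 8-bit: ~1,000 GB
# 4-bit: ~500 GB
```

Against 64 GB of RAM plus 24 GB of VRAM, even a 4-bit quant is off by a factor of five or more, which is why the free-tier answer above was nonsense.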

ogogmad•3h ago
Maybe you should use a reasoning model. Got this from O3, which took 1m31s to think about the answer: https://chatgpt.com/s/t_687b9221fb748191af4e30f597f18443

Bottom line: Your 5800X3D + 64 GB RAM + RTX 3090 will run Kimi K2’s 1.8‑bit build, but response times feel more like a leisurely typewriter than a snappy chatbot. If you want comfortable day‑to‑day use, plan either a RAM upgrade or a second (or bigger) GPU—or just hit the Moonshot API and save some waiting.

threatripper•3h ago
I second this. o3 is pretty spot-on, while 4o answered exactly like what the parent got.

I rarely use 4o anymore for anything. I'd rather wait for o3 than quickly get a pile of rubbish.

brookst•3h ago
4o is great for simple lookup and compute tasks; stuff like “scale this recipe to feed 12” or “what US wineries survived prohibition”.

o3 all the way for anything needing analysis or creative thought.

jug•3h ago
These cases are probably why OpenAI has stated that GPT-4.1 is their last non-reasoning model and that GPT-5 will determine whether, and how much, to reason based on the query.
dyl000•3h ago
Can't wait to see how mid this is going to be.
m3kw9•2h ago
Yeah, this is as big a piece of news as the iPhone 18 being in the pipeline.
pjs_•1h ago
Sama clocked this way back. He has used this exact analogy: that new GPT models will feel like incremental iPhone releases, compared to the first iPhone / GPT-3.