Show HN: Minecraft Creeper meets 90s Tamagotchi

https://github.com/danielbrendel/krepagotchi-game
1•foxiel•7m ago•0 comments

Show HN: Termiteam – Control center for multiple AI agent terminals

https://github.com/NetanelBaruch/termiteam
1•Netanelbaruch•7m ago•0 comments

The only U.S. particle collider shuts down

https://www.sciencenews.org/article/particle-collider-shuts-down-brookhaven
1•rolph•10m ago•1 comment

Ask HN: Why do purchased B2B email lists still have such poor deliverability?

1•solarisos•10m ago•0 comments

Show HN: Remotion directory (videos and prompts)

https://www.remotion.directory/
1•rokbenko•12m ago•0 comments

Portable C Compiler

https://en.wikipedia.org/wiki/Portable_C_Compiler
2•guerrilla•14m ago•0 comments

Show HN: Kokki – A "Dual-Core" System Prompt to Reduce LLM Hallucinations

1•Ginsabo•15m ago•0 comments

Software Engineering Transformation 2026

https://mfranc.com/blog/ai-2026/
1•michal-franc•16m ago•0 comments

Microsoft purges Win11 printer drivers, devices on borrowed time

https://www.tomshardware.com/peripherals/printers/microsoft-stops-distrubitng-legacy-v3-and-v4-pr...
3•rolph•17m ago•1 comment

Lunch with the FT: Tarek Mansour

https://www.ft.com/content/a4cebf4c-c26c-48bb-82c8-5701d8256282
2•hhs•20m ago•0 comments

Old Mexico and her lost provinces (1883)

https://www.gutenberg.org/cache/epub/77881/pg77881-images.html
1•petethomas•23m ago•0 comments

'AI' is a dick move, redux

https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/
3•cratermoon•24m ago•0 comments

The source code was the moat. But not anymore

https://philipotoole.com/the-source-code-was-the-moat-no-longer/
1•otoolep•25m ago•0 comments

Does anyone else feel like their inbox has become their job?

1•cfata•25m ago•1 comment

An AI model that can read and diagnose a brain MRI in seconds

https://www.michiganmedicine.org/health-lab/ai-model-can-read-and-diagnose-brain-mri-seconds
2•hhs•28m ago•0 comments

Dev with 5 years of experience switched to Rails, what should I be careful about?

1•vampiregrey•30m ago•0 comments

AlphaFace: High Fidelity and Real-Time Face Swapper Robust to Facial Pose

https://arxiv.org/abs/2601.16429
1•PaulHoule•31m ago•0 comments

Scientists discover “levitating” time crystals that you can hold in your hand

https://www.nyu.edu/about/news-publications/news/2026/february/scientists-discover--levitating--t...
2•hhs•33m ago•0 comments

Rammstein – Deutschland (C64 Cover, Real SID, 8-bit – 2019) [video]

https://www.youtube.com/watch?v=3VReIuv1GFo
1•erickhill•34m ago•0 comments

Tell HN: Yet Another Round of Zendesk Spam

3•Philpax•34m ago•0 comments

Postgres Message Queue (PGMQ)

https://github.com/pgmq/pgmq
1•Lwrless•38m ago•0 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•40m ago•1 comment

NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•42m ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
3•EA-3167•42m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
6•fliellerjulian•44m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•46m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•47m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
3•RickJWagner•48m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•49m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
15•jbegley•49m ago•3 comments

GPT-5-reasoning alpha found in the wild

https://twitter.com/btibor91/status/1946532308896628748
66•dejavucoder•6mo ago

Comments

anonzzzies•6mo ago
Look at those people shouting this will be AGI / total disruption etc. Seems Elon managed one thing: to amass the dumbest folks together. 99.99% MAGA, crypto, and near-Markov-chain-quality comments.
ImHereToVote•6mo ago
Maybe this won't be. How long do you think it will be before a machine can outdo any human in any given domain? I personally think it will be after they are able to rewrite their own code. You?
owebmaster•6mo ago
> I personally think it will be after they are able to rewrite their own code.

My threshold is when it can create a new Google

ImHereToVote•6mo ago
Why not put it earlier than that? Why not when it can start and run its own LLC? I would think that by the time that LLC is bigger than Google, it might already be obvious.
kasey_junk•6mo ago
They write their own code now so how long will it be?
ImHereToVote•6mo ago
They "can" write parts of it. But they can't rewire the weights. Those are learned, not coded.
Fade_Dance•6mo ago
Seems like this will be one of the areas that will improve with multi-agentic AI, where groups of agents can operate via consensus, check/test outputs, manage from a higher meta level, etc. Not that any of that would be "magic" but the advantages of expanding laterally to that approach seem fairly obvious when it comes to software development.

So in my eyes actually think it's probably more to do with reducing the cost of AI inference by another order of magnitude, at least when it comes to mass market tools. Existing basic code-generation tools from a single AI are already fairly expensive to run compute wise.

elif•6mo ago
Your contrary certainty has the same humorously over-confident tone.
perching_aix•6mo ago
Which is how and why these political strategies work so well.
AaronAPU•6mo ago
It’s incredible how few people can see their own reflection.
thm•6mo ago
99% of AI influencers are the same people who emailed you pictures as a Word attachment a year ago.
torginus•6mo ago
This is what put me off Claude Code. When I wanted to dig in, I tried to watch a few YouTube videos to see an expert's opinion on it, and 90% of the people who talk about it feel like former crypto shills who, judging by their channel history, seem to have never written a single line of code without AI in their lives.
plemer•6mo ago
I get it, but have you reviewed high-quality sources or actually tried the product?

Association fallacy: “You know who else was a vegetarian? Hitler.”

haneul•6mo ago
As someone who doesn't keep track of the influencer scene at the moment because I am way addicted to building...

You should totally give Claude Code a try. The biggest problem is that it is glaze-optimized, so you have to work at getting it to not treat you like the biggest genius of all time. But when you manage to get into a good flow with it, and your project is very predictably searchable, the results start to be quite helpful, even if just to get yourself unstuck when you're in a rut.

reactordev•6mo ago
This. Claude Code was the only one able to grok my 20-year-old C++ codebase so that I could update things deep in its bowels and make it compile, because I had neglected it on a thumb drive for 15 years. I had no mental model of what was going on. Claude built one in a few minutes.
torginus•6mo ago
I will try it. I did use Cursor agents beforehand (using Sonnet/Opus 4), and my problems were that it was slower than I was (meaning me prompting the AI) and not good enough to leave unattended.
jug•6mo ago
It annoys me to see the huge discrepancy between social media content on AI and actual enterprise use. AI is happening; it's absolutely becoming an integral part of many businesses, including our own. But these guys are just doodling in MS Paint, and they're flooding the channels.
reactordev•6mo ago
Enterprises are in the same situation as you are. Many of them are posting marketing about AI without actually having AI. They are using OpenAI's APIs to say they have AI.

I can count on my hands the number of enterprises that actually have AI models of their own.

bdangubic•6mo ago
Just curious: why does an enterprise have to have their own model? A company can use ____ (someone else's model) and still accomplish amazing AI shit in their products.
reactordev•6mo ago
Because data protection and privacy compliance hasn’t caught up yet.
jgalt212•6mo ago
or given up under the unceasing pressure from the AI madness.
garciasn•6mo ago
Because I am not permitted to share my code nor client information/data with unapproved third parties; it’s a contractual obligation. So; we train our own models to do those things.

I use Claude Code for building products that don’t have these limitations. And fuck is it amazing. Even little things that would have taken days are done in a single line of text.

rvz•6mo ago
> Many of them are posting marketing about AI without actually having AI. They are using OpenAI's APIs to say they have AI.

And somehow these companies are now "AI companies", just like in the 2010s, when your average food market down the street was a "tech company" or the bakery next to it was a "blockchain company". This happens all the time with bubbles and manias.

These enterprises appear even more confused about what they do as they rebrand themselves, and it's a sign they are desperate for survival.

anonzzzies•6mo ago
Claude code is good though: no need to watch influencers for that. Or ever.
sorokod•6mo ago
Golgafrincham Ark B material.

https://hitchhikers.fandom.com/wiki/Golgafrincham

threatripper•6mo ago
We have to wait and test it ourselves to see how far it gets in our daily tasks. If the improvement continues like it did in the past, that would be pretty far. Not quite a full researcher position but an average student assistant for sure.
shiandow•6mo ago
I'll believe in AGI when OpenAI stops paying human developers.
brookst•6mo ago
I don’t see how this follows. Does AGI mean that it is free to operate and has no hardware / power constraints?

The fact that I see people being paid to dig a trench does not make me doubt the existence of trenching machines. It just means that the tool is not always the best choice for every job.

rvz•6mo ago
> Does AGI mean that it is free to operate and has no hardware / power constraints?

It is that, plus an autonomous system that can generate $100BN in profits (OpenAI and Microsoft's definition of AGI).

So maybe when we see a commercial airplane with no human pilots on board but an LLM piloting the plane with no intervention needed?

Would you board such a plane?

brookst•6mo ago
Extreme rhetoric weakens arguments. Would you let Albert Einstein perform open heart surgery on you? AHA! Maybe he wasn’t that smart.
graycat•6mo ago
AGI???? Again, once again, over again, yet again, one more time:

(1) Given triangle ABC, by means of Euclidean construction find point D on line AB and point E on line BC so that the lengths |AD| = |DE| = |EC|.

(2) Given triangle ABC, by means of Euclidean construction inscribe a square so that each corner of the square is on a side of the triangle.

Come ON AGI, let's have some RESULTS that human general intelligence can do -- gee, I solved (1) in the 10th grade.
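
For what it's worth, problem (2) has a well-known similar-triangles answer once the square's base is placed on one side of the triangle; a quick sketch of the computation (standard geometry, not from the thread):

```latex
% Let b = |AB| and let h be the height of C above line AB.
% A square of side s resting on AB has its top edge at height s,
% where the triangle's horizontal cross-section has width b(1 - s/h).
% The square fits exactly when that width equals s:
s = b\left(1 - \frac{s}{h}\right)
\;\Longrightarrow\; sh = bh - bs
\;\Longrightarrow\; s = \frac{bh}{b + h}
```

The classical compass-and-straightedge construction realizes this as a dilation: inscribe any small square with its base on AB and one upper corner on AC, then scale it from vertex A until the other upper corner lands on BC.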

swat535•6mo ago
Wasn't Sam Altman claiming AGI is just a couple of years away and OpenAI is at the forefront of it?
ogogmad•6mo ago
In related news, OpenAI and Google have announced that their latest non-public models have received Gold in the International Mathematics Olympiad: https://news.ycombinator.com/item?id=44614872

That said, the public models don't even get bronze.

[EDIT] Dupe of this: https://news.ycombinator.com/item?id=44614872

johnecheck•6mo ago
Wow. That's an impressive result, though we definitely need some more details on how it was achieved.

What techniques were used? He references scaling up test-time compute, so I have to assume they threw a boatload of money at this. I've heard talk of running models in parallel and comparing results; if OpenAI ran this 10,000 times in parallel and cherry-picked the best one, this is a lot less exciting.

If this is legit, then I really want to know what tools were used and how the model used them.
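
The "run it many times and pick the best" idea described above is usually implemented as self-consistency, i.e. majority voting over sampled answers (or best-of-N with a learned verifier). A minimal sketch, with all names hypothetical; `sample_model` stands in for whatever API call produces one attempt:

```python
import random
from collections import Counter

def majority_vote(sample_model, prompt, n=16):
    """Self-consistency: sample n final answers and return the most common.

    `sample_model` is a hypothetical callable returning one answer string
    per call; real systems would extract the answer from a full
    chain-of-thought transcript before voting.
    """
    answers = [sample_model(prompt) for _ in range(n)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n  # answer plus its vote share

# Toy stand-in model: answers "42" with probability 3/4, "41" otherwise.
random.seed(0)
toy = lambda _prompt: random.choice(["42", "42", "42", "41"])
ans, share = majority_vote(toy, "What is 6 * 7?", n=100)
```

Best-of-N replaces the vote with a scoring model; either way the compute bill scales linearly with N, which is why the cherry-picking question matters.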

badgersnake•6mo ago
> If this is legit

Indeed.

chvid•6mo ago
What is this? Guerilla marketing from a 300B startup?
bgwalter•6mo ago
None of the X enthusiasts has even seen a benchmark or used the thing, but we're glad to know that Duke Nukem Forever will be released soon.
mjburgess•6mo ago
It's strange that none of these $100s bn+ companies fund empirical research into the effects of AI tools on actual job roles as part of their "benchmarks". Oh wait, no it's not.
brookst•6mo ago
Agree, it would be bizarre if they did.
mjburgess•6mo ago
It would be bizarre if they benchmarked the models based on actual task performance?
brookst•6mo ago
Let’s not move the goalposts. It would be bizarre if they got into a sociological study of impact on job roles.
spuz•6mo ago
Not sure why you think that? One of their biggest sales pitches to businesses is the potential for their products to replace certain job roles in the future. Why wouldn't they be actively doing research on real-world usage and impact right now?
bawana•6mo ago
Well, I asked ChatGPT if I could run Kimi K2 on a 5800X3D with 64 GB of RAM and a 3090, and it said:

Yes, you absolutely can run Kimi-K2-Instruct on a PC with:

✅ CPU: AMD Ryzen 7 5800X3D
✅ GPU: NVIDIA RTX 3090 (24 GB VRAM)
✅ RAM: 64 GB system memory

This is more than sufficient for both: loading and running the full Kimi-K2-Instruct model in FP16 or INT8, and quantizing it with weight-only INT8 using Hugging Face Optimum + bitsandbytes.

Kimi K2 has a trillion parameters; even an 8-bit quant would need on the order of a terabyte of system RAM + VRAM.

This is with the free ChatGPT that us peasants use. I don't have the means to run Grok 4 Heavy, DeepSeek, or Kimi K2 to ask them.

I can't wait to see what accidental wars will start when we put AI in the kill chain.
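
The memory claim above can be sanity-checked with back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter (ignoring KV cache and activation overhead). A quick sketch:

```python
def model_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Rough weight-only memory footprint in GB (10^9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

# Kimi K2: ~1 trillion parameters.
fp16 = model_memory_gb(1e12, 16)    # ~2000 GB
int8 = model_memory_gb(1e12, 8)     # ~1000 GB
q1_8 = model_memory_gb(1e12, 1.8)   # ~225 GB (the 1.8-bit build o3 mentions)
```

Even the 1.8-bit quant far exceeds the 64 GB RAM + 24 GB VRAM in question, which is consistent with o3's "leisurely typewriter" caveat below about streaming weights from disk.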

ogogmad•6mo ago
Maybe you should use a reasoning model. Got this from O3, which took 1m31s to think about the answer: https://chatgpt.com/s/t_687b9221fb748191af4e30f597f18443

Bottom line: Your 5800X3D + 64 GB RAM + RTX 3090 will run Kimi K2’s 1.8‑bit build, but response times feel more like a leisurely typewriter than a snappy chatbot. If you want comfortable day‑to‑day use, plan either a RAM upgrade or a second (or bigger) GPU—or just hit the Moonshot API and save some waiting.

threatripper•6mo ago
I second this. o3 is pretty spot on, while 4o answered exactly like what the parent got.

I rarely use 4o anymore for anything. I would rather wait for o3 than quickly get a pile of rubbish.

brookst•6mo ago
4o is great for simple lookup and compute tasks; stuff like “scale this recipe to feed 12” or “what US wineries survived prohibition”.

o3 all the way for anything needing analysis or creative thought.

jug•6mo ago
These cases are probably why OpenAI has stated GPT-4.1 is their last non-reasoning model, and GPT-5 will determine whether and how much to reason based on the query.
dyl000•6mo ago
Can't wait to see how mid this is going to be.
m3kw9•6mo ago
Yeah, this is about as big a piece of news as the iPhone 18 being in the pipeline.
pjs_•6mo ago
Sama clocked this way back. He has used this exact analogy: that new GPT models will feel like incremental new iPhone releases, cf. the first iPhone/GPT-3.
lucisferre•6mo ago
Seems a bit early to use that analogy, though. Early iPhone upgrades generally had significant improvements in almost all specs.
tim333•6mo ago
>The declaration of AGI ... will force Microsoft to relinquish its rights to OpenAI revenue ...

That's an interesting business arrangement. There must be an incentive for OpenAI to get declaring?