frontpage.

The scapegoats guide to organizational 'transformation'

https://softwaredoug.com/blog/2025/09/16/transformations
1•JnBrymn•57s ago•0 comments

Are we living in a black hole?

https://www.nationalgeographic.com/science/article/are-we-living-inside-black-hole-universe
1•bookofjoe•1m ago•1 comment

Show HN: Demo of AI-enabled voice/vision features on open source hardware [video]

https://www.youtube.com/watch?v=zBFd0aLzWAk
1•mmajzoobi•2m ago•0 comments

[ARC-AGI-2 SoTA] Efficient Evolutionary Program Synthesis

https://ctpang.substack.com/p/e760eba7-c8b3-4fda-b631-61b89dd0d0fd
2•georgehill•3m ago•0 comments

Behind the Mirror: Inside the World of Big Brother

https://www.publishersweekly.com:443/9781464224539
1•mooreds•4m ago•0 comments

Forests store carbon wealth but credit systems undervalue their potential

https://phys.org/news/2025-08-global-forests-vast-carbon-wealth.html
1•PaulHoule•4m ago•0 comments

I Gained 25 Pounds. Why Are People Acting Like I've Committed a Crime?

https://www.glamour.com/story/i-gained-25-pounds-why-are-people-acting-like-ive-committed-a-crime
1•mooreds•4m ago•0 comments

Show HN: Tailkits UI, 200 Tailwind components for landing pages

https://tailkits.com/ui/
2•yucelfaruksahan•5m ago•0 comments

Old Page of Google Toolbar

https://www.google.com/toolbar/ie/done.html
1•Alifatisk•6m ago•0 comments

Arkime: An open source, large scale, full packet capturing and indexing system

https://github.com/arkime/arkime
2•redbell•6m ago•0 comments

Scaling AI Evaluation Through Expertise

https://www.harvey.ai/blog/scaling-ai-evaluation-through-expertise
2•JnBrymn•7m ago•0 comments

A Cross-Team Risk Map of In-House CIAM for B2B and B2C Apps

https://securityboulevard.com/2025/09/a-cross-team-risk-map-of-in-house-ciam-for-b2b-b2c-apps/
1•mooreds•7m ago•0 comments

Official MCPs are at risk from Willison's lethal trifecta attack

https://www.tramlines.io/blog
2•coderinsan•9m ago•1 comment

What are truffles, and why are they so expensive? [video]

https://www.youtube.com/watch?v=7ZCTazqNYMo
1•gmays•13m ago•0 comments

AMD ROCm 7.0 Released

https://github.com/ROCm/ROCm/releases/tag/rocm-7.0.0
1•latchkey•15m ago•0 comments

Show HN: LLM Memory Notes – semantic memory layer for AI agents (MCP)

https://llm-memory.com/
2•josef_chmel•16m ago•0 comments

Windows Secure Boot certificates are expiring, here is everything you need to know

https://www.neowin.net/news/windows-secure-boot-certificates-are-expiring-here-is-everything-you-...
3•rolph•19m ago•0 comments

US backpedals as Hyundai factory ICE raid enrages South Korea

https://www.theregister.com/2025/09/16/us_hyundai_immigration/
8•rntn•21m ago•0 comments

Julia Neagu: Why evals haven't landed (yet) + lessons from evals at Copilot

https://twitter.com/juliaaneagu/status/1964704824299253888
3•JnBrymn•21m ago•0 comments

From $0 to $40M ARR: Inside the tech that powers Bolt.new

https://newsletter.posthog.com/p/from-0-to-40m-arr-inside-the-tech
1•gaurang_tandon•21m ago•0 comments

Unit Test Isolation Using MVCC

https://blog.alexsanjoseph.com/posts/20250913-improving-pytest-with-mvcc/
1•alexsanjoseph•24m ago•1 comment

Should AIs have a right to their ancestral humanity?

https://www.lesswrong.com/posts/5zMH3sFikvGK7AKi2/should-ais-have-a-right-to-their-ancestral-huma...
1•kromem•24m ago•0 comments

Mini Microscope for Real-Time Brain Imaging

https://www.ucdavis.edu/news/engineers-create-mini-microscope-real-time-brain-imaging
2•gmays•26m ago•0 comments

Luanox – a modern, snappy module host for Lua

https://mrcjkb.dev/posts/2025-09-16-lumen-labs-announcement.html
2•mrcjkb•27m ago•0 comments

Comparing Git Mirror Options

https://www.lloydatkinson.net/posts/2025/comparing-git-mirror-options/
2•lloydatkinson•29m ago•0 comments

Show HN: Archil's one-click infinite, S3-backed local disks now available

8•huntaub•29m ago•1 comment

Orcas sink one boat, damage another, off coast of Portugal

https://divemagazine.com/scuba-diving-news/orcas-sink-one-boat-damage-another-off-coast-of-portugal
3•speckx•29m ago•0 comments

Bitrig's Swift Interpreter: From Code to Bytecode

https://www.bitrig.app/blog/interpreter-bytecode
2•jacobx•29m ago•1 comment

FileVault on macOS Tahoe Uses iCloud Keychain to Store Its Recovery Key

https://sixcolors.com/post/2025/09/filevault-on-macos-tahoe-no-longer-uses-icloud-to-store-its-re...
1•tosh•31m ago•0 comments

Your Unit Tests Suck

https://medium.com/@lodestar97/your-unit-tests-suck-58d0f6fcc0a2
1•vettyvignesh•34m ago•0 comments

Ask HN: What in your opinion is the best model for vibecoding? My thoughts below

1•adinhitlore•1h ago
So I've been vibecoding for years, but over the past 1-2 weeks it became an obsession, to the point that my eyes are literally red and inflamed right now since I can't stop (slightly humorous... I was feeling worse yesterday, the redness is now gone).

Anyway, my takes:

1. The #1 spot is VERY debatable for me; it's a toss-up between GPT-5 high, "Claude thinking" (both Sonnet 4 and Opus 4.1) and, surprise, surprise: Qwen 235B 'thinking' (the "hidden gem").

Their pros and cons:

GPT-5 high: usually gives VERY long code, so it's generous, no compute is saved; it's a bona fide model, but it sometimes seems too aligned for my taste. For example: whenever I ask it to design a novel text-generation model, unless I am very specific in my requirements it tries to dumb the design down to a pure n-gram model, which almost feels like an insult, basically saying "look, we at OpenAI are the best, here's a stupid Markov chain for you to play with, but leave the big game to us". If, however, you phrase the request in more detail, then even if you show some pessimism it will not "echo back" the pessimism but will instead try to convince you it can be done with some tweaks. The con: usually it's just... not smart. This is easy to see when you go through the code and find it has written code very specific to the example you gave, which is the number one symptom of bad programming; a variable/method should be as universal as possible. You don't need a template that only uploads via FTP when you plan to upload via both HTTP and FTP, to give one example (see the sketch after these takes).

2. Claude: Initially I thought it was the best one, and for pure coding it's "getting there", but for designing algorithms GPT-5 high and Qwen 'thinking' outperform it on ideas. I'd say Sonnet 4 32k is better for designing and Opus for the actual coding; depending on the task and programming language, they may perform differently. The good news is that the actual code usually compiles with very few warnings and almost never errors, so it knows what it's doing. Even GPT-5 high is worse here, and Qwen will sometimes, though rarely, give you bad code that produces an error, be it in Python 3 or C/gcc.
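
To make the "universal vs. example-specific" point from take #1 concrete, here is a minimal Python sketch. The function names, host, and credentials are hypothetical illustrations of the shape of the problem, not actual model output:

    import io
    from ftplib import FTP
    from pathlib import Path
    from urllib.parse import urlparse
    from urllib.request import Request, urlopen

    def upload_report_ftp(path: str) -> None:
        # The over-specific shape: protocol, host and credentials are
        # hard-wired to the one example in the prompt, so it is useless
        # the moment HTTP enters the plan.
        with FTP("ftp.example.com") as ftp, open(path, "rb") as f:
            ftp.login("user", "password")
            ftp.storbinary(f"STOR {Path(path).name}", f)

    def upload(path: str, dest: str) -> None:
        # The universal shape: one entry point that dispatches on the
        # destination URL's scheme, covering both protocols you planned.
        parts = urlparse(dest)
        data = Path(path).read_bytes()
        if parts.scheme == "ftp":
            with FTP(parts.hostname) as ftp:
                ftp.login()  # anonymous; pass credentials as needed
                ftp.storbinary(f"STOR {Path(path).name}", io.BytesIO(data))
        elif parts.scheme in ("http", "https"):
            req = Request(dest, data=data, method="PUT")
            with urlopen(req) as resp:
                resp.read()
        else:
            raise ValueError(f"unsupported scheme: {parts.scheme}")

With the second shape, upload("report.csv", "ftp://files.example.com/") and upload("report.csv", "https://api.example.com/upload") both work from the same call site, which is what I actually want when I said both protocols were planned.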

Since I covered the 'good', here are the bad and the ugly:

Gemini, Grok, Amazon Nova, whatever Microsoft has: don't, just don't. Their shortcomings are so obvious that I'm convinced all the people who hype them online are either Elon Musk (for Grok), Bill Gates (Phi-4 etc.) or Zuckerberg (Llama). Their code is very short, so it obviously won't cover the features requested; compilation feels like 'quantum mechanics', a 50/50 chance; the code is written in the worst way possible; and sometimes they even misinterpret entirely what your goal is. You may have some luck debugging with Gemini 2.5 Pro if you're patient; frankly, even the GPT-4 version on chatgpt.com (not the "arena!") is bad at fixing errors, though OK with the basic ones.

Another hidden gem: https://console.upstage.ai/playground/chat (I'm not "shilling" for it, hard to believe, I know). I don't ignore it entirely because, as an indie model, I hope it's not too aligned, so it may actually give you code that Yudkowsky and Yampolskiy would consider an "immediate risk to humanity, civilization and the galaxy".

My experience is roughly 90% with C, with a lot of Python too and little-to-no C#, though back in the day vibecoding C# on GPT-4 sucked a lot.

My ultimate issue as of now is that while LLMs/transformers are great, they still lack the innovation and human thought power to come up with original ideas. However, they code way faster than humans, obviously, and the code usually works with few warnings or errors. I think the focus towards 2030 should be on innovation power and the design of complex algorithms. Altman dreaming about "discovering new physics" seems a little ambitious given the current status quo. Again, they're great and they help me a lot; looking forward to seeing their impact on society at a larger scale!

Comments

reify•1h ago
The 1925 Ford Model T Touring Car is the best bet.

It has amazing brakes for a 1920s car.

The best thing, in my experience, is that it does not rely on fantasy AI to drive it. You can just turn the key and vroom, away you go.

My local mechanic is particularly pleased with my purchase and recommendation.

He says he can repair my car without first having to repair the damage the AI mechanic did a few days earlier, which, in the long run, saves me an awful lot of money on car maintenance.

I don't have to pay two people to fix one job.

Isn't it amazing what humans can do?