frontpage.

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•5m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•6m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•8m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•8m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•8m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
1•pseudolus•9m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•9m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•10m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•10m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
2•obscurette•11m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•12m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•12m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•15m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•16m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•16m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•17m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•18m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•18m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•19m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
6•derriz•19m ago•1 comments

AI Skills Marketplace

https://skly.ai
1•briannezhad•19m ago•1 comments

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•20m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•21m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•23m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
1•edward•24m ago•1 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•26m ago•1 comments

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•27m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•28m ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•30m ago•2 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•31m ago•0 comments

AI will happily design the wrong thing for you

https://www.antonsten.com/articles/ai-will-happily-design-the-wrong-thing-for-you/
96•zdw•4mo ago

Comments

onion2k•4mo ago
>You can spot AI-generated work from a mile away because it lacks the intentional decisions that make products feel right.

You definitely can when someone has just vibe coded a thing in a weekend. But when someone has actually taken a lot of care to use AI to build something well, using many iterations of small steps to create code that's basically what they'd have written themselves and to integrate good UX driven by industry-standard libraries (e.g. shadcn, daisy), then it looks pretty much exactly like any other MVP app... because that's what it is.

jcims•4mo ago
Agree generally, but in my experience AI-generated code tends to have more gold-plating than hand-spun code (likely due to the substantially lowered cost of generating it).

Also, generated comments tend to explain how/what rather than why, and the why is usually what I want to know.

onion2k•4mo ago
>AI generated code tends to have more gold-plating than hand spun code

It does if you let the AI generate lots of code at once. If you take small steps and build iteratively, telling it what to do (following a plan that the AI generated, if you want to), then it doesn't.

This isn't revelatory though. It's exactly what a developer would do - if you give a person a vague idea about what they should make and just leave them to get on with it, they'll come back with something that does things you didn't want, too.

dotancohen•4mo ago
What is gold plating in this context? Checking the myriad of corner cases at the top of a function? Actually writing the doc comments? Actually writing the tests and documentation? Good git commit messages?

jotux•4mo ago
I work on a large C++ codebase for an aerospace application. The code is relatively conservative, and when we add things we're generally conservative in our approach. Copilot (with Claude or GPT under the hood) constantly wants to use modern and complicated approaches to any new thing we add.

Need to check three bits in a byte from a peripheral and populate them to telemetry?

Claude/GPT: Let's make a base struct with a virtual map function, then template a struct to hold the register with functions to extract bits, then make unique pointers to each register, then create a vector of pointers to map all the values, etc.

You can write a very clear, and long, prompt that explains not to do any of that and to just pull the bits out of the byte, but writing the prompt takes more time than just writing the code. These tools seem to always reach for what I would call the pristine solution as their first attempt, and I think many would call that gold-plating.
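
To make the contrast concrete, here is a minimal sketch of the plain version (the flag names and bit positions are made up for illustration, not our actual registers):

    #include <cstdint>

    // Hypothetical layout: the peripheral's status byte carries three
    // telemetry flags in bits 2..4.
    struct StatusFlags {
        bool heater_on;
        bool valve_open;
        bool fault;
    };

    // Pull the three bits out of the byte and populate telemetry.
    StatusFlags extract_flags(std::uint8_t status) {
        StatusFlags f{};
        f.heater_on  = ((status >> 2) & 0x1) != 0;
        f.valve_open = ((status >> 3) & 0x1) != 0;
        f.fault      = ((status >> 4) & 0x1) != 0;
        return f;
    }

That is the whole job; the base struct, templates, and vector of unique pointers add nothing here.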

antonvs•4mo ago
You can typically address that with a system prompt, so you don't have to mention your expectations every time.

If you're using one of the coding agents like Claude Code or Gemini CLI, you should have a .md file (e.g. CLAUDE.md) for your project which contains those kinds of instructions, and any other useful context for the project.

https://www.anthropic.com/engineering/claude-code-best-pract...
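
For a codebase like the one described above, a few lines in that file go a long way; the wording below is a hypothetical example, not something taken from the linked guide:

    # Project conventions (hypothetical example)
    - Conservative embedded C++ codebase: prefer the simplest approach that works.
    - Do not introduce templates, virtual interfaces, or dynamic allocation unless asked.
    - For register/bit-field work, use plain masks and shifts; no new abstractions.

Claude Code reads CLAUDE.md automatically at the start of a session, so the expectation is stated once instead of repeated in every prompt.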

marxism•4mo ago
Here's an example of gold plating from a CLI I created this past weekend.

I did not like the terse errors when parsing JSON. "invalid type: boolean `true`, expected a string", line: 3, column: 24

So I asked for Elm style friendly error messages that try to give you all the information to fix things right up front.

https://github.com/PeoplesGrocers/json-archive/blob/master/s...
https://github.com/PeoplesGrocers/json-archive/blob/master/s...
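
Roughly the kind of difference I mean - the "after" below is a mock-up of the style, not the tool's literal output, and the file and field names are invented:

    before:  invalid type: boolean `true`, expected a string, line: 3, column: 24

    after:   Problem reading example.json, line 3:

                 "compress": true
                             ^^^^
             The value of "compress" should be a string, but I found the
             boolean `true`. Strings need quotes, for example "gzip".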

And then since I had time, I asked for some documentation to show/explain the various edge case handling decisions I made.

https://github.com/PeoplesGrocers/json-archive/blob/master/d...

It's gold plating because no one wants or needs my little tool. If I spent that same hour iterating on the output format to be more user friendly while at work, I would be taken out behind the woodshed and shot. It's pure wasted effort.

jcims•4mo ago
Exactly.

jcims•4mo ago
Basically the category of stuff that makes you wonder if someone had too much time on their hands.

observationist•4mo ago
You can also tell the difference between a skillfully crafted table and kit furniture from Ikea. Both may perform equally well technically, and there are some fantastic examples where Ikea furniture has served as the base for a beautifully crafted piece of art. There's a similar phenomenon happening here - you can use AI to code things, but the craft of it still requires the thoughtful and careful application of domain-specific knowledge.

AI can even get there, if guided by someone who knows what they're doing. We need more tutorials on how to guide AI, just like tutorials for Photoshop used to walk amateurs through producing excellent logos, designs, and graphics. A whole generation of Photoshop users learned to master the tools by following and sharing cookie-cutter instructions, then learning to generalize.

We should see the same thing happen with AI, except I think the capabilities will exceed the relevance of instructions too fast for any domain skills to matter.

If AI coding stagnates and hits a plateau for a couple of years, then we'll see human skill differentiate uses of the tool in a pragmatic way, and a profusion of those types of tutorials. We're still seeing an acceleration of capabilities, though, with faster, more capable models appearing more frequently, ~every 3-4 months.

At some point there will be a practical limit to release schedules, with resource constraints on both human and compute sides, and there will be more "incremental" updates, comparable to what Grok is already doing with multiple weekly incremental updates on the backend, and 4-5 major updates throughout the year.

Heck, maybe at some point we'll have a reasonable way of calibrating these capability improvements and understanding what progress means relative to human skills.

Anyway - the vast majority of AI code feels very "cheap Ikea" at this point, with only a few standouts from people who already knew what they were doing.

lordnacho•4mo ago
For all the praise I give to Claude, I still use it as a fast version of what I would do myself:

- Looking at compiler errors and fixing them. Looking at program output and fixing errors.

- Looking for documentation on the internet. This used to be a skill in itself: Do I need the reference work (language spec), a Stack Overflow answer, or an article?

- Typing out changes quickly. This goes a little bit deeper than just typing or using traditional "change all instances of this name" tools, but its essence is that to edit a program, you often have to make a bunch of changes to different documents that preserve referential integrity.

All these things get amazingly faster because the agent can mix the three legs.

However, it doesn't save you from knowing what needs to be done. If you couldn't in principle type out the whole thing yourself, AI will not help you much. It's very good at confidently suggesting the wrong path and taking you there. It also makes bad choices that you can spot as it is writing out changes, and it's useful to just tell it "hey why'd you do that?" as it writes things. If you don't keep it in line, it veers off.

The benefit for me is the level of thinking it allows me. If I'm working on a high-level change, and I write a low-level bug, I will have to use my attention on figuring this out before coming back to my original context. The window of time during the day where I can attempt a series of low-level edits that satisfy a high-level objective is narrow. With AI, I can steer the AI when I'm doing other things. I can do it late at night, or when I'm on a call. I'm also not stuck "between save points" since I can always make AI finish off whatever it was doing.

jotux•4mo ago
>I still use it as a fast version of what I would do myself

This is how I use AI coding tools, but I've internally described it to myself as, "Use the tool to write code only when I am certain of what the expected output should be."

If there is something that needs to be done and some reasoning is required, I just do it myself.

raxxorraxor•4mo ago
Agents have gotten better but I believe improvements will be costly from now on. Still, I fondly remember the first interactions and to a degree they still hold true:

Me: Hello AI, could you implement a solution for <problem>?

AI: Of course! Here you are: plonk.

Me: Is that a good solution?

AI: Absolutely not! Don't you dare solve it like this, you should do plonk...

giancarlostoro•4mo ago
As will Stack Overflow code if you don't actually research before blindly pasting "solutions" from there. With LLMs there's just a higher chance of it being an issue. Always treat an LLM like a junior, and if you don't think you can maintain the code without the LLM, you shouldn't accept the solution. Don't cut corners for speed.

s0sa•4mo ago
I know it’s just a figure of speech in this case, but personifying AI as “happily” doing something feels wrong.

repeekad•4mo ago
Wait until you have a coworker refer to what “he” wrote when talking about LLM output.

Not to mention the countless kids right now talking to these things for hours a day on their parents’ credit cards.

BergAndCo•4mo ago
If you say you want to have a retro terminal where you can talk to ChatGPT, instead of telling you to use an LLM CLI in a terminal with a retro theme applied, LLMs will just build you a 500K-line terminal client from scratch uncritically. Even if you ask it to make architectural decisions, it will just riff off of your bad idea. And if you ask for critique, it will tell you bad ideas are good and good ideas are bad. We all have stories about arguing with LLMs until they tell us "Okay actually you're right, I was wrong because I think whatever my training data tells me to think."

fennecbutt•4mo ago
I mean so will your fellow humans if you don't instruct, monitor and mentor them properly.