frontpage.

Goto Considered Awesome [video]

https://www.youtube.com/watch?v=1UKVEUGEk6Y
1•linkdd•1m ago•0 comments

Show HN: I Built a Free AI LinkedIn Carousel Generator

https://carousel-ai.intellisell.ai/
1•troyethaniel•2m ago•0 comments

Implementing Auto Tiling with Just 5 Tiles

https://www.kyledunbar.dev/2026/02/05/Implementing-auto-tiling-with-just-5-tiles.html
1•todsacerdoti•3m ago•0 comments

Open Challenge (Get all Universities involved)

https://x.com/i/grok/share/3513b9001b8445e49e4795c93bcb1855
1•rwilliamspbgops•4m ago•0 comments

Apple Tried to Tamper Proof AirTag 2 Speakers – I Broke It [video]

https://www.youtube.com/watch?v=QLK6ixQpQsQ
2•gnabgib•6m ago•0 comments

Show HN: Vibe as a Code / VaaC – new approach to vibe coding

https://www.npmjs.com/package/@gace/vaac
1•bstrama•7m ago•0 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
3•shivamhwp•8m ago•0 comments

Toledo Derailment Rescue [video]

https://www.youtube.com/watch?v=wPHh5yHxkfU
1•samsolomon•10m ago•0 comments

War Department Cuts Ties with Harvard University

https://www.war.gov/News/News-Stories/Article/Article/4399812/war-department-cuts-ties-with-harva...
5•geox•13m ago•0 comments

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
1•yi_wang•14m ago•0 comments

A Bid-Based NFT Advertising Grid

https://bidsabillion.com/
1•chainbuilder•18m ago•1 comments

AI readability score for your documentation

https://docsalot.dev/tools/docsagent-score
1•fazkan•25m ago•0 comments

NASA Study: Non-Biologic Processes Don't Explain Mars Organics

https://science.nasa.gov/blogs/science-news/2026/02/06/nasa-study-non-biologic-processes-dont-ful...
2•bediger4000•28m ago•2 comments

I inhaled traffic fumes to find out where air pollution goes in my body

https://www.bbc.com/news/articles/c74w48d8epgo
2•dabinat•29m ago•0 comments

X said it would give $1M to a user who had previously shared racist posts

https://www.nbcnews.com/tech/internet/x-pays-1-million-prize-creator-history-racist-posts-rcna257768
4•doener•31m ago•1 comments

155M US land parcel boundaries

https://www.kaggle.com/datasets/landrecordsus/us-parcel-layer
2•tjwebbnorfolk•36m ago•0 comments

Private Inference

https://confer.to/blog/2026/01/private-inference/
2•jbegley•39m ago•1 comments

Font Rendering from First Principles

https://mccloskeybr.com/articles/font_rendering.html
1•krapp•42m ago•0 comments

Show HN: Seedance 2.0 AI video generator for creators and ecommerce

https://seedance-2.net
1•dallen97•46m ago•0 comments

Wally: A fun, reliable voice assistant in the shape of a penguin

https://github.com/JLW-7/Wally
2•PaulHoule•48m ago•0 comments

Rewriting Pycparser with the Help of an LLM

https://eli.thegreenplace.net/2026/rewriting-pycparser-with-the-help-of-an-llm/
2•y1n0•49m ago•0 comments

Lobsters Vibecoding Challenge

https://gist.github.com/MostAwesomeDude/bb8cbfd005a33f5dd262d1f20a63a693
2•tolerance•49m ago•0 comments

E-Commerce vs. Social Commerce

https://moondala.one/
1•HamoodBahzar•50m ago•1 comments

Avoiding Modern C++ – Anton Mikhailov [video]

https://www.youtube.com/watch?v=ShSGHb65f3M
2•linkdd•51m ago•0 comments

Show HN: AegisMind–AI system with 12 brain regions modeled on human neuroscience

https://www.aegismind.app
2•aegismind_app•55m ago•1 comments

Zig – Package Management Workflow Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
1•Retro_Dev•57m ago•0 comments

AI-powered text correction for macOS

https://taipo.app/
1•neuling•1h ago•1 comments

AppSecMaster – Learn Application Security with hands on challenges

https://www.appsecmaster.net/en
1•aqeisi•1h ago•1 comments

Fibonacci Number Certificates

https://www.johndcook.com/blog/2026/02/05/fibonacci-certificate/
2•y1n0•1h ago•0 comments

AI Overviews are killing the web search, and there's nothing we can do about it

https://www.neowin.net/editorials/ai-overviews-are-killing-the-web-search-and-theres-nothing-we-c...
5•bundie•1h ago•1 comments

Ask HN: How are you using LLMs?

3•FailMore•6mo ago
Me: I don't like to have fact-based conversations with LLMs (e.g. can my cat eat cooked chicken on the bone?). I like having open-ended conversations with LLMs where being vaguely in the right direction is still useful (e.g. practicing Chinese with ChatGPT voice mode).

I use Claude Code and swing between finding it useful (because my memory of specific functions is average) and being a bit bummed out when I have to spend a bunch of time cleaning up poor logic/flows in the application.

Curious to know the intricacies of how you are interacting with LLMs too.

Comments

efortis•6mo ago
If the problem is simple enough for top-down design, I write signatures along with comments containing I/O examples.

But if I need to explore, I ask for an example and then re-prompt it top-down.
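
For illustration, here is a minimal C sketch of the kind of stub this workflow produces (C being the one language named elsewhere in the thread). The function, its name, and the example values are hypothetical; the signature plus the example comments is the prompt, and the body is what the LLM is asked to write. The same examples can later double as test cases for whatever the model returns.

    /* Hypothetical prompt artifact: a signature plus comments with I/O examples,
     * handed to the LLM so it can write the body top-down. */
    #include <stddef.h>

    /* slugify: lower-case the title and collapse runs of non-alphanumeric
     * characters into single '-' separators.
     *
     * I/O examples:
     *   slugify("Hello, World!", buf, sizeof buf)  -> "hello-world"
     *   slugify("  C is fun  ",  buf, sizeof buf)  -> "c-is-fun"
     *   slugify("",              buf, sizeof buf)  -> ""
     *
     * Writes at most bufsize-1 characters plus a terminating NUL into buf
     * and returns buf.
     */
    char *slugify(const char *title, char *buf, size_t bufsize);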

silentpuck•6mo ago
I use LLMs mostly for learning and understanding.

When a book doesn’t explain something clearly, I ask for a deeper explanation — with examples, and sometimes exercises.

It’s like having a quiet teacher nearby who never gets frustrated if I don’t get it right away. No magic. Just thinking.

I also started building my own terminal-based GPT client (in C, of course). That’s a whole journey in itself — and it’s only just begun.
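
As a rough idea of what such a client involves, here is an illustrative single-shot sketch (not the commenter's project). It assumes an OpenAI-compatible /v1/chat/completions endpoint, an OPENAI_API_KEY environment variable, a hypothetical model name, and libcurl, and it prints the raw JSON response rather than parsing it.

    /* Illustrative single-shot chat client. Build: cc gptcli.c -lcurl */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    /* Write callback: stream the response body straight to stdout. */
    static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata) {
        (void)userdata;
        return fwrite(data, 1, size * nmemb, stdout);
    }

    int main(void) {
        const char *key = getenv("OPENAI_API_KEY");
        if (!key) { fprintf(stderr, "set OPENAI_API_KEY first\n"); return 1; }

        char prompt[512];
        printf("you> ");
        if (!fgets(prompt, sizeof prompt, stdin)) return 0;
        prompt[strcspn(prompt, "\n")] = '\0';

        /* A real client must JSON-escape the prompt and parse the reply;
         * both are omitted to keep the sketch short. */
        char body[1024];
        snprintf(body, sizeof body,
                 "{\"model\":\"gpt-4o-mini\","
                 "\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}", prompt);

        char auth[600];
        snprintf(auth, sizeof auth, "Authorization: Bearer %s", key);

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
        hdrs = curl_slist_append(hdrs, auth);

        curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

        CURLcode rc = curl_easy_perform(curl);   /* raw JSON lands on stdout */
        if (rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }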

stephenr•6mo ago
I use the terms "LLM" or "AI" (as in, "I used an LLM/AI to write a <insert task> helper") as a quick hint to ignore articles/links/etc., the same way I've previously used the phrases "You won't believe what happened next" or "they hate this one trick" to avoid spam-bait article links, or "shocked face overlay" to avoid bullshit YouTube videos.

So, thank you for that, AI techbros. Keep telling us loudly and proudly that you're using "AI" to write your slop; it makes it much easier to know what to avoid when skimming titles.

atleastoptimal•6mo ago
LLMs are best when you know exactly how to implement something and can describe it fully, but writing it all yourself would take longer. They're also good at rigorous attention to detail in well-established domains where the rules are deterministic and not subtle.

TXTOS•6mo ago
I mostly use LLMs inside a reasoning shell I built: a lightweight semantic OS where every input gets recorded as a logic node (with ΔS and λ_observe vectors) and stitched into a persistent memory tree.

It solved a bunch of silent failures I kept running into with tools like RAG and long-form chaining:

    drift across hops (multi-step collapse)

    hallucination on high-similarity chunks

    forgetting prior semantic commitments across calls

The shell is plain-text only (no install), MIT licensed, and backed by tesseract.js's creator. I'll drop the link if anyone's curious; not pushing, just realized most people don't know this class of tools exists yet.
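
To make the shape of that concrete, here is a rough C sketch of what a plain-text "logic node" store could look like. The field names, the delta_s/lambda_observe stand-ins, the TSV log, and the scores are all guesses for illustration, not the commenter's actual design.

    /* Illustrative guess at the general shape of such a tool: each turn becomes
     * a node carrying the text plus placeholder ΔS and λ_observe scores, linked
     * to its parent, and appended to a plain-text log so the tree persists. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct LogicNode {
        int id;                      /* position in the tree */
        int parent_id;               /* -1 for the root turn */
        double delta_s;              /* stand-in for a ΔS "semantic drift" score */
        double lambda_observe;       /* stand-in for a λ_observe score */
        char text[256];              /* the recorded input or model output */
        struct LogicNode *parent;
    } LogicNode;

    /* Append a node to the plain-text memory tree and return it. */
    static LogicNode *record(LogicNode *parent, int id, const char *text,
                             double delta_s, double lambda_observe, FILE *log) {
        LogicNode *n = calloc(1, sizeof *n);
        n->id = id;
        n->parent = parent;
        n->parent_id = parent ? parent->id : -1;
        n->delta_s = delta_s;
        n->lambda_observe = lambda_observe;
        snprintf(n->text, sizeof n->text, "%s", text);
        /* One line per node keeps the store plain-text and greppable. */
        fprintf(log, "%d\t%d\t%.3f\t%.3f\t%s\n",
                n->id, n->parent_id, n->delta_s, n->lambda_observe, n->text);
        return n;
    }

    int main(void) {
        FILE *log = fopen("memory_tree.tsv", "a");
        if (!log) { perror("memory_tree.tsv"); return 1; }

        /* Example turn pair; the scores are arbitrary illustrative values. */
        LogicNode *q = record(NULL, 0, "user: summarize the RAG failure modes", 0.00, 0.90, log);
        LogicNode *a = record(q, 1, "model: drift, hallucination, forgotten commitments", 0.12, 0.85, log);

        printf("node %d (parent %d): %s\n", a->id, a->parent_id, a->text);
        fclose(log);
        free(a);
        free(q);
        return 0;
    }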

wmeredith•6mo ago
I use LLMs as a personal assistant for writing and research. I just treat them like a junior and double-check their work, and I'm good to go.