
Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•1m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•2m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•5m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•5m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•8m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•12m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•12m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•12m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•15m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•18m ago•1 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•19m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•20m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•20m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•25m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•30m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•34m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•35m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•36m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•43m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•46m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•46m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•47m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•48m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•48m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•49m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•49m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•53m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•54m ago•1 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•55m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•55m ago•0 comments

Ask HN: How are you using LLMs?

3•FailMore•6mo ago
Me: I don’t like having fact-based conversations with LLMs (e.g. can my cat eat cooked chicken on the bone?). I prefer open-ended conversations where being vaguely in the right direction is still useful (e.g. practicing Chinese with ChatGPT voice mode).

I use Claude Code, and I waver between finding it useful (because my memory of specific functions is average) and being a bit bummed out when I have to spend a bunch of time cleaning up poor logic/flows in the application.

Curious to hear the intricacies of how you’re interacting with LLMs too.

Comments

efortis•6mo ago
If the problem is simple enough for top-down design, I write the signatures along with comments containing I/O examples.

But if I need to explore stuff, I ask for some examples first and then re-prompt top-down.
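
For what it’s worth, a minimal sketch of that style in Python (slugify and its examples are mine, purely illustrative): the signature plus the I/O comments are the prompt, and the model’s job is to fill in the body.

    # Expected behaviour, handed to the LLM as part of the prompt:
    #   slugify("Hello, World!")      -> "hello-world"
    #   slugify("  Spaces   here  ")  -> "spaces-here"
    def slugify(title: str) -> str:
        ...  # body intentionally left for the LLM to generate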

silentpuck•6mo ago
I use LLMs mostly for learning and understanding.

When a book doesn’t explain something clearly, I ask for a deeper explanation — with examples, and sometimes exercises.

It’s like having a quiet teacher nearby who never gets frustrated if I don’t get it right away. No magic. Just thinking.

I also started building my own terminal-based GPT client (in C, of course). That’s a whole journey in itself — and it’s only just begun.

stephenr•6mo ago
I treat the terms "LLM" or "AI" (as in, "I used an LLM/AI to write a <insert task> helper") as a quick hint to ignore articles/links/etc., the same way I’ve previously used "You won't believe what happened next" or "they hate this one trick" to avoid spam-bait article links, or "shocked face overlay" to avoid bullshit YouTube videos.

So, thank you for that, AI techbros. Keep telling us loudly and proudly that you’re using "AI" to write your slop; it makes it much easier to know what to avoid when skimming titles.

atleastoptimal•6mo ago
LLMs are best when you know exactly how to implement something and can describe it fully, but writing everything yourself would take longer. They’re also good at rigorous attention to detail in domains that are well established, where the rules are deterministic and not subtle.

TXTOS•6mo ago
I mostly use LLMs inside a reasoning shell I built, a lightweight semantic OS where every input gets recorded as a logic node (with ΔS and λ_observe vectors) and stitched into a persistent memory tree.

It solved a bunch of silent failures I kept running into with tools like RAG and long-form chaining:

    drift across hops (multi-step collapse)

    hallucination on high-similarity chunks

    forgetting prior semantic commitments across calls

The shell is plain-text only (no install), MIT licensed, and backed by tesseract.js’s creator. I’ll drop the link if anyone’s curious; not pushing, just realized most people don’t know this class of tools exists yet.
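
For anyone trying to picture the idea, here is a rough Python sketch of “every input recorded as a logic node and stitched into a persistent memory tree”. ΔS and λ_observe are treated as opaque placeholder vectors, and all names are illustrative only, not the tool’s actual code:

    from dataclasses import dataclass, field
    from typing import List
    import json

    @dataclass
    class LogicNode:
        text: str                     # the raw input for this reasoning step
        delta_s: List[float]          # placeholder for the comment's ΔS vector
        lambda_observe: List[float]   # placeholder for the λ_observe vector
        children: List["LogicNode"] = field(default_factory=list)

    class MemoryTree:
        """Each prompt is stitched in as a child of the node it follows,
        and the whole tree can be dumped to disk as plain text (JSON here)."""

        def __init__(self) -> None:
            self.root = LogicNode("ROOT", [], [])

        def record(self, parent: LogicNode, text: str,
                   delta_s: List[float], lam: List[float]) -> LogicNode:
            node = LogicNode(text, delta_s, lam)
            parent.children.append(node)
            return node

        def dump(self, path: str) -> None:
            def encode(n: LogicNode) -> dict:
                return {"text": n.text, "dS": n.delta_s,
                        "lambda": n.lambda_observe,
                        "children": [encode(c) for c in n.children]}
            with open(path, "w") as f:
                json.dump(encode(self.root), f, indent=2)
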
wmeredith•6mo ago
I use LLMs as a personal assistant for writing and research. I just treat them like a junior and double-check their work, and I'm good to go.