frontpage.

Willow – Protocols for an uncertain future [video]

https://fosdem.org/2026/schedule/event/CVGZAV-willow/
1•todsacerdoti•32s ago•0 comments

Feedback on a client-side, privacy-first PDF editor I built

https://pdffreeeditor.com/
1•Maaz-Sohail•4m ago•0 comments

Clay Christensen's Milkshake Marketing (2011)

https://www.library.hbs.edu/working-knowledge/clay-christensens-milkshake-marketing
2•vismit2000•11m ago•0 comments

Show HN: WeaveMind – AI Workflows with human-in-the-loop

https://weavemind.ai
4•quentin101010•16m ago•1 comment

Show HN: Seedream 5.0: free AI image generator that claims strong text rendering

https://seedream5ai.org
1•dallen97•18m ago•0 comments

A contributor trust management system based on explicit vouches

https://github.com/mitchellh/vouch
2•admp•20m ago•1 comment

Show HN: Analyzing 9 years of HN side projects that reached $500/month

2•haileyzhou•20m ago•0 comments

The Floating Dock for Developers

https://snap-dock.co
2•OsamaJaber•21m ago•0 comments

Arcan Explained – A browser for different webs

https://arcan-fe.com/2026/01/26/arcan-explained-a-browser-for-different-webs/
2•walterbell•22m ago•0 comments

We are not scared of AI, we are scared of irrelevance

https://adlrocha.substack.com/p/adlrocha-we-are-not-scared-of-ai
1•adlrocha•24m ago•0 comments

Quartz Crystals

https://www.pa3fwm.nl/technotes/tn13a.html
1•gtsnexp•26m ago•0 comments

Show HN: I built a free dictionary API to avoid API keys

https://github.com/suvankar-mitra/free-dictionary-rest-api
2•suvankar_m•28m ago•0 comments

Show HN: Kybera – Agentic Smart Wallet with AI Osint and Reputation Tracking

https://kybera.xyz
2•xipz•30m ago•0 comments

Show HN: brew changelog – find upstream changelogs for Homebrew packages

https://github.com/pavel-voronin/homebrew-changelog
1•kolpaque•34m ago•0 comments

Any chess position with 8 pieces on board and one pair of pawns has been solved

https://mastodon.online/@lichess/116029914921844500
2•baruchel•35m ago•1 comment

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
2•birdculture•37m ago•0 comments

Projecting high-dimensional tensor/matrix/vect GPT->ML

https://github.com/tambetvali/LaegnaAIHDvisualization
1•tvali•38m ago•1 comment

Show HN: Free Bank Statement Analyzer to Find Spending Leaks and Save Money

https://www.whereismymoneygo.com/
2•raleobob•42m ago•1 comment

Our Stolen Light

https://ayushgundawar.me/posts/html/our_stolen_light.html
2•gundawar•42m ago•0 comments

Matchlock: Linux-based sandboxing for AI agents

https://github.com/jingkaihe/matchlock
2•jingkai_he•45m ago•0 comments

Show HN: A2A Protocol – Infrastructure for an Agent-to-Agent Economy

2•swimmingkiim•49m ago•1 comment

Drinking More Water Can Boost Your Energy

https://www.verywellhealth.com/can-drinking-water-boost-energy-11891522
1•wjb3•52m ago•0 comments

Proving Laderman's 3x3 Matrix Multiplication Is Locally Optimal via SMT Solvers

https://zenodo.org/records/18514533
1•DarenWatson•54m ago•0 comments

Fire may have altered human DNA

https://www.popsci.com/science/fire-alter-human-dna/
4•wjb3•55m ago•2 comments

"Compiled" Specs

https://deepclause.substack.com/p/compiled-specs
1•schmuhblaster•1h ago•0 comments

The Next Big Language (2007) by Steve Yegge

https://steve-yegge.blogspot.com/2007/02/next-big-language.html?2026
1•cryptoz•1h ago•0 comments

Open-Weight Models Are Getting Serious: GLM 4.7 vs. MiniMax M2.1

https://blog.kilo.ai/p/open-weight-models-are-getting-serious
4•ms7892•1h ago•0 comments

Using AI for Code Reviews: What Works, What Doesn't, and Why

https://entelligence.ai/blogs/entelligence-ai-in-cli
3•Arindam1729•1h ago•0 comments

Show HN: Solnix – an early-stage experimental programming language

https://www.solnix-lang.org/
4•maheshbhatiya•1h ago•0 comments

DoNotNotify is now Open Source

https://donotnotify.com/opensource.html
12•awaaz•1h ago•3 comments

Writing in the Age of LLMs

https://www.sh-reya.com/blog/ai-writing/
55•hamelsmu•7mo ago

Comments

hamelsmu•7mo ago
Really good article. I was discussing this with Shreya (the author), and an interesting insight was that prompting an LLM to follow these instructions does not work reliably.

I’ve had similar frustrations. Maybe the next thing to try is fine-tuning? Curious what others think.

staticman2•7mo ago
This is somehow not satirical?

"The AI told me splattering em dashes everywhere was what I want—and I, the author of this AI written blog, agrees!"

Slow_Hand•7mo ago
I don't see that sentence anywhere in the guide. Where did you find it?
staticman2•7mo ago
That wasn't a real quote. I guess my sarcasm didn't transmit to text. I found the entire article so ridiculous that for a moment I thought it was satirical.

The actual quote was this:

"Em dashes are great for inserting clarifying details, quick shifts, or sharp asides—without breaking the sentence. I love them. When used well, they add rhythm and emphasis. They help writing flow the way people actually talk."

aonsager•7mo ago
I think you missed the point. That section is asserting that people wrongly assume em dashes to be a distinguishing trait of LLM output, while in fact they are good tools for human writers (as the author demonstrates).
staticman2•7mo ago
I didn't miss the point—you missed mine.

Deciding you like em dashes— and writing a blog post saying so—because an AI told you they were good for human writers—is funny behavior—even if "you" weren't LLM output masquerading as an author.

It's very generous of you to assume a "human writer" wrote that blog post—is it not?

bytepoet•7mo ago
I really enjoyed reading this, particularly the first part, where the author is specific about why we invariably (and often vaguely) find LLM-generated text slightly off.

I cherish writing and find it a wonderful tool for thinking. So far, I've tried to do technical writing without much LLM help. I do run the final writing through a good model to point out factual inaccuracies.

iLemming•7mo ago
Plain language writing necessitates a good editor, just like great cooking needs a proper kitchen and equipment. Certainly, a master chef can cook something amazing on a bare campfire, using literal sticks and stone tools, yet in order to become a true chef one still needs to start in the kitchen.

I always enjoyed writing prose in Emacs, because all the tools I need are always at my fingertips - thesaurus, spellchecking, etymology lookup, dictionaries, translation, search, and these days LLMs as well.

And the level of integration some Emacs packages demonstrate is simply bananas - I can ask LLMs to help me at any point, whether I'm writing some notes, sending a Slack message to a colleague, editing a comment in a codebase or a git commit message, or even when running shell commands. You can easily manipulate the context applied to the conversation, see the payload, repeat with variability, swap models anytime, call external tools, replace things in place, examine the diff of the changes, search through your prior conversations, etc.

I honestly can barely contain my excitement at seeing how my ultimate choice gets vindicated. When I committed to Emacs while the world moved toward newer, shinier tools, it often felt isolating - like swimming against the tide. Then LLMs arrived, and for a moment I wondered if this revolutionary technology would finally render my beloved editor obsolete. Instead, the opposite happened: LLMs integrated so seamlessly into Emacs that the experience surpasses even specialized tools built exclusively for AI interaction. Years of investment weren't wasted - they were preparation for this moment of perfect synergy. The irony is beautiful: the very tool that seemed antiquated to most people keeps proving to be the most adaptable to the future.

binarysneaker•7mo ago
Great post, thank you. You articulated my feelings about ChatGPT models, and one of the reasons I prefer Claude to ChatGPT: I find Claude tries to please less. Anyway, I turned your blog post into a Cursor rule, and told ChatGPT to follow it too, and so far the output is much improved IMO. If you want to try it, the rule is here: https://github.com/davenicoll/cursor-rules/blob/main/.cursor...
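
For readers wanting to try the same approach, here is a minimal sketch of what such a style rule could look like as a plain-text `.cursorrules` file at the repo root (the filename convention is Cursor's legacy rules format; every guideline below is an illustrative assumption distilled from this thread, not the actual rule linked above):

```
# .cursorrules (hypothetical sketch, not the linked rule)
# Writing-style guidelines for LLM-assisted prose:
- Prefer short, declarative sentences over long compound ones.
- Use em dashes sparingly, not as a default connector in every paragraph.
- Avoid sycophantic openers ("Great question!") and filler transitions.
- State claims directly; cut hedging phrases that add no information.
- Match the author's existing voice; do not homogenize tone.
```

Because the file is plain natural-language instructions, the same text can be pasted into ChatGPT custom instructions, as the commenter describes.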