Lessons Learned Writing a Book Collaboratively with LLMs

16•scottfalconer•9mo ago
(Note: I'm not linking the resulting book. This post focuses solely on the process and practical lessons learned collaborating with LLMs on a large writing project.)

Hey HN, I recently finished a months-long project collaborating intensively with various LLMs (ChatGPT, Claude, Gemini) to write a book about using AI in management. The process became a meta-experiment, revealing practical workflows and pitfalls that felt worth sharing.

This post breaks down the workflow, quirks, and lessons learned.

Getting Started: Used ChatGPT as a sounding board for messy notes. One morning, stuck in traffic, tried voice dictation directly into the chat app. Expected chaos, got usable (if rambling) text. Lesson 1: Capture raw ideas immediately. Use voice/text to get sparks down, then refine. Key for overcoming the blank page.

My Workflow evolved organically:

Conversational Brainstorming: "Talk" ideas through with the AI. Ask for analogies, counterarguments, structure. Treat it like an always-available (but weird) partner.

Partnership Drafting: Let the AI generate first passes when stuck ("Explain X simply for Y"), but treat them as raw material needing heavy human editing and fact-checking. Or write first and have the AI polish. I often alternated.

Iterative Refinement: The core loop. Paste draft -> ask for specific feedback ("Is this logic clear?") -> integrate selectively -> repeat. (Lesson 2: Vague prompts = vague results; give granular instructions. Often this means breaking tasks down: logic first, then style.)

Practice Safe Context Management: LLMs forget (context windows). (Lesson 3: You are the AI's external memory. Constantly re-paste context and style guides; use system prompts. Assume zero persistence across sessions.) See the sketch after this list.

Read-Aloud Reviews: Use TTS or read drafts aloud. (Lesson 4: Ears catch awkwardness that eyes miss. Crucial for natural flow.)
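Here's a minimal sketch of what that looks like in practice (the refinement loop plus re-sent context), assuming the OpenAI Python client; the model name, style guide, file name, and prompts are illustrative placeholders, not a prescription:

    # Refinement loop sketch: re-send the style guide on every call (the model
    # keeps nothing across sessions), ask for one kind of feedback at a time,
    # and leave the human to decide what to integrate.
    from openai import OpenAI

    client = OpenAI()

    STYLE_GUIDE = "Plain, direct prose. No filler, no 'delve', minimal hedging."

    def review_pass(draft: str, focus: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "You are a critical editor. Style guide:\n" + STYLE_GUIDE},
                {"role": "user",
                 "content": f"Give feedback on this draft. Focus only on: {focus}\n\n{draft}"},
            ],
        )
        return response.choices[0].message.content

    draft = open("chapter_03.md").read()
    print(review_pass(draft, "Is the logic of the argument clear?"))   # logic first
    print(review_pass(draft, "Awkward phrasing, filler, repetition"))  # then style

The specific client doesn't matter; the point is that every call restates the context you care about, because nothing persists between calls.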

The "AI A-Team": Different models have distinct strengths: ChatGPT: Creative "liberal arts" type; great for analogies/prose, but verbose/flattery-prone. Claude: Analytical "engineer"; excels at logic/accuracy/code, but maybe don't invite for drinks. Gemini: The "copyeditor"; good for large-context consistency. Can push back constructively. (Lessons 5 & 6: Use the right tool for the job; learn strengths via experimentation & use models to check each other. Feeding output between them often revealed flaws - Gemini calling out ChatGPT's tells was useful).

Stuff I Did Not Do Well:

Biggest hurdles:

AI Flattery is Real: Helpfulness optimization means praise for bad work. (Lesson 7: Prompt for critical feedback: 'Critique harshly.' Don't trust praise; human review is vital.)

The "AI Voice" is Pervasive: Understand why it sounds robotic (training bias, RLHF). (Lesson 8: Combat AI-isms. Prompt for specific tones; edit out filler, hedging, repetition, and 'delve'; kill em dashes unless the register is formal.)

Verification Burden is HUGE: AI hallucinates and gets facts wrong. (Lesson 9: Assume nothing is correct without verification. You are the fact-checker. Non-negotiable despite the workload. Ground claims; be careful with nuance and lived experience.)

Perfectionism is a Trap: AI enables endless iteration. (Lesson 10: Set limits; trust your judgment. Know 'good enough'. Don't let the AI erode your voice. Kill your darlings.)

My Personal Role in This Fiasco:

Deep AI collaboration elevates the human role to: Manager (goals/context), Arbitrator (evaluating conflicts), Integrator (synthesizing), Quality Control (verification/ethics), and Voice (infusing personality/nuance).

Conclusion: This wasn't push-button magic; it was an intensive, iterative partnership requiring constant human guidance, judgment, and effort. It accelerated things dramatically and sparked ideas, but the final quality depended entirely on active human management.

Key takeaway: Embrace the mess. Capture fast. Iterate hard. Know your tools. Verify everything. Never abdicate your role as the human mind in charge. Would love to hear thoughts on others' experiences.

Comments

robotbikes•9mo ago
Nice. I leverage the strengths of AI in a way that affirms the human element in the collaboration. AI as it exists in LLMs is a powerful source of potentially meaningful language but at this point LLMs don't have a consistent conscious mind that exists over time like humans do. So it's more like summoning a djinn to perform some task and then it disappears back into the ether. We of course can interweave these disparate tasks into a meaningful structure and it sounds like you have some good strategies for how to do this.

I have found that using an LLM to critique your writing is a helpful way of getting free, generic-but-specific feedback. I find this route more interesting than the copy-pasta AI-voiced stuff. Suggesting that the AI embodies a specific type of character, such as a pirate, can make the answers more interesting than just finding the median answer, and add some flavor to the white bread.

scottfalconer•9mo ago
One of the things I found helpful for getting away from specific/formulaic feedback was asking the LLM to ask me questions. At one point I asked a fresh LLM to read the book and then ask me questions. It showed me where there were narrative gaps and confusing elements that a reader would run into, but didn't rely on the specific "answer" from the LLM itself.

I also had a bunch of personal stories interwoven in and it told me I was being "indulgent" which was harsh but ultimately accurate.

vunderba•9mo ago
That's a great approach. I find LLMs work really well as Socratic sounding boards and can lead you, as the writer, to explore avenues you might otherwise not even have noticed.

lnwlebjel•9mo ago
Given that humans are 'wired for story', perhaps you should consider indulging. These could be what makes the book stand out, after all.

scottfalconer•9mo ago
In the end there are plenty of stories, but they're ones that are relevant. The story the LLM gave feedback on was about flipping a raft on the Grand Canyon; the LLM's advice was that it felt unrelated to the point I was trying to make. That made me realize I had it in there more because I wanted to talk about rafting the Grand Canyon than because it was useful and entertaining to readers.

lnwlebjel•9mo ago
Thanks for posting this, it's a very interesting case study. Considering that this type of writing is the thing they seem to excel at, it's interesting that they still seem to be only OK at it if you're trying to produce a serious, genuinely useful output. This fits with my experience, though yours is much more extensive and thorough. In particular, I fully concur on the voice/tone issues, on the need to verify everything (always the case anyway), and on "Never abdicate your role as the human mind in charge" -- sometimes the suggestions it makes are just not that good.

The question is, do you think this process was faster using the various LLMs? Could two (or N) sufficiently motivated people produce the same thing in the same time? (And if so, what is N?) I'm wondering if the caveats and limitations end up costing as much time as they save. Maybe you're 2x faster; if so, that would be significant and good to know.

In the abstract, this is similar to my experience with AI-produced code. Except for very simple, contained code, you ultimately need to read and understand it well enough to make sure that it's doing all the things that you want and not producing bugs. I'm not sure this saves me much time.

scottfalconer•9mo ago
I think it was faster in that I would have never written the book without the LLMs. Essentially they unlocked the swirl of thoughts and notes that lived somewhere between my head, TextEdit, emails to myself, and anywhere else I stashed things.

It's like it unblocked the "hard part" (getting the words into a coherent form for others), while letting me focus on the "value parts" (my unique perspective / ideas).

It might not have saved me time overall, but it made it a hell of a lot more fun, so in the end I completed it - and maybe AI helping us see things through to completion is where we'll see a big unblock in human potential.

yorkyarn•9mo ago
very useful case study