frontpage.

How to build a personal webpage from scratch

https://rutar.org/writing/how-to-build-a-personal-webpage-from-scratch/
1•fanf2•18s ago•0 comments

When Accuracy Meets Parallelism in Diffusion Language Models

http://66.42.62.31:1313/blogs/text-diffusion/
1•snyhlxde•1m ago•1 comment

Show HN: Bring screencasts into your editor with CodeMic

https://CodeMic.io/#hn
1•seansh•1m ago•1 comment

Flock cameras remained active after officials asked to be turned off

https://therecord.media/flock-safety-cameras-remained-active-after-cities-asked-turned-off
2•ghouse•2m ago•1 comment

ToGo – Python bindings for TG (Fast point-in-polygon)

https://github.com/mindflayer/togo
1•mindflayer•4m ago•1 comment

Anthropic donates MCP to the Linux Foundation for open and accessible AI

https://aaif.io/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation-aaif-...
1•Santosh83•4m ago•0 comments

Simple Teflon coating boosts hydrogen production efficiency by 40%

https://techxplore.com/news/2025-12-simple-teflon-coating-boosts-hydrogen.html
1•geox•6m ago•0 comments

Light-bending ferroelectric that controls blue and UV light could transform chipmaking

https://phys.org/news/2025-12-material-blue-ultraviolet-advanced-chipmaking.html
1•westurner•6m ago•0 comments

Anthropic's Vision Advantage Is a Lot Like Apple's from the 2010s

https://danielmiessler.com/blog/anthropics-vision-advantage
1•wavelander•6m ago•0 comments

AI Can Write Your Code. It Can't Do Your Job

https://terriblesoftware.org/2025/12/11/ai-can-write-your-code-it-cant-do-your-job/
1•speckx•7m ago•0 comments

Why GPT-5.2 is our model of choice for Augment Code Review

https://www.augmentcode.com/blog/why-gpt-5-2-is-our-model-of-choice-for-augment-code-review
7•knes•7m ago•1 comment

AI Agent Security: A curated list of tools for red teaming and defense

https://github.com/ProjectRecon/awesome-ai-agents-security
1•ProjectRecon•7m ago•1 comment

100% Local LLM. Mistral Vibe vs. Opencode. A Claude Code Alternative? [video]

https://www.youtube.com/watch?v=WKBzcpU88zo
1•grigio•8m ago•0 comments

Most used programming languages in 2025

https://devecosystem-2025.jetbrains.com
1•birdculture•10m ago•0 comments

Bionetta: Efficient Client-Side Zero-Knowledge Machine Learning Proving

https://arxiv.org/abs/2510.06784
1•badcryptobitch•12m ago•1 comment

System76 Launches Pop!_OS 24.04 LTS with COSMIC Desktop

https://www.phoronix.com/news/System76-Ships-Pop-OS-24.04
2•mikece•12m ago•0 comments

Show HN: CyberCage – Security platform for AI tools and MCP servers

https://cybercage.io/
4•ziyasal•13m ago•2 comments

Show HN: Free Security audit that checks what other tools miss

https://domainoptic.com/
1•renbuilds•13m ago•0 comments

Tool UI

https://www.tool-ui.com
1•handfuloflight•14m ago•0 comments

Protocolo Flux: RBS Funded by Cosmic Canon (ZKP Roadmap)

https://paquinobr-svg.github.io/manifiesto-flux/
1•ProtocoloFLUX•14m ago•0 comments

Rivian goes big on autonomy, with custom silicon, Lidar, and a hint at robotaxis

https://techcrunch.com/2025/12/11/rivian-goes-big-on-autonomy-with-custom-silicon-lidar-and-a-hin...
3•ryan_j_naughton•14m ago•0 comments

Comparing AI Agents to Cybersecurity Professionals in Real-World Pen Testing

https://arxiv.org/abs/2512.09882
1•littlexsparkee•16m ago•1 comment

Marco Rubio bans Calibri font at State Department for being too DEI

https://techcrunch.com/2025/12/10/marco-rubio-bans-calibri-font-at-state-department-for-being-too...
3•rbanffy•17m ago•0 comments

Hyper-Scalers Are Using CXL to Lower the Impact of DDR5 Supply Constraints

https://www.servethehome.com/hyper-scalers-are-using-cxl-to-lower-the-impact-of-ddr5-supply-const...
1•rbanffy•19m ago•0 comments

Over 10k Docker Hub images found leaking credentials, auth keys

https://www.bleepingcomputer.com/news/security/over-10-000-docker-hub-images-found-leaking-creden...
3•todsacerdoti•21m ago•0 comments

Maybe AI is a regular platform shift

https://frontierai.substack.com/p/maybe-ai-is-a-regular-platform-shift
1•cgwu•22m ago•0 comments

GovSignals is solving government procurement using Trigger.dev

https://trigger.dev/customers/govsignals-customer-story
1•semicognitive•23m ago•0 comments

Huge undersea wall dating from 5000 BC found in France

https://www.bbc.com/news/articles/crk7lg1j146o
1•neversaydie•24m ago•0 comments

Rivian Unveils Custom Silicon, R2 Lidar Roadmap, and Universal Hands Free

https://riviantrackr.com/news/rivian-unveils-custom-silicon-r2-lidar-roadmap-universal-hands-free...
19•doctoboggan•25m ago•8 comments

GPT-5.2 System Card [pdf]

https://cdn.openai.com/pdf/3a4153c8-c748-4b71-8e31-aecbde944f8d/oai_5_2_system-card.pdf
4•synthwave•27m ago•0 comments

Show HN: Varu AI – Interactively generate novel-length AI drafts

https://www.varu.us/
4•levihanlen•7mo ago

Comments

levihanlen•7mo ago
Hi HN,

I'm Levi Hanlen, the developer behind Varu AI: https://www.varu.us

Like many writers and readers, I had far more story ideas than time to write them all, and frankly, some stories I just wanted to read that didn't exist yet. I started building Varu AI to explore if AI could help draft these complex, long-form narratives in a more collaborative way.

Varu AI works scene-by-scene. It doesn't outline beforehand (apart from the scene outline right before each scene). The core idea is interactive guidance using what I call 'plot promises' – inspired by Brandon Sanderson's writing lectures on narrative structure. You define key plot points or character goals, and the AI works to fulfill them. If you don't like the direction, you can adjust these promises mid-stream to actively steer the story.
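The plot-promise loop described above might be sketched roughly like this; the types, fields, and scoring heuristic here are hypothetical illustrations, not Varu's actual implementation:

```typescript
// Hypothetical sketch of the "plot promise" idea: each promise is a
// goal the generator should work toward, with a progress counter the
// user can edit mid-stream to steer the story.
interface PlotPromise {
  id: string;
  description: string; // e.g. "Kara discovers the ship's true origin"
  importance: number;  // how aggressively to advance it (1-10)
  progress: number;    // 0..100, bumped as scenes fulfill it
}

// Pick which promise the next scene should advance: roughly, the most
// important promise that is furthest from completion.
function nextPromiseToAdvance(promises: PlotPromise[]): PlotPromise | undefined {
  const open = promises.filter(p => p.progress < 100);
  open.sort((a, b) =>
    b.importance * (100 - b.progress) - a.importance * (100 - a.progress));
  return open[0];
}

const promises: PlotPromise[] = [
  { id: "p1", description: "Hero learns to fight", importance: 5, progress: 80 },
  { id: "p2", description: "Invaders' origin revealed", importance: 9, progress: 10 },
];
console.log(nextPromiseToAdvance(promises)?.id); // "p2"
```

Letting the user edit `importance` or add/remove promises between scenes is what makes the steering interactive: the same scene generator simply sees a different goal next time it runs.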

As a test case (which I wrote about recently), I used it to generate a 59,000-word sci-fi first draft in about 30 minutes of interaction. I'm also having it write me a 300,000-word novel, titled "Under Falling Banners"; you can find it on the website, and I'm still reading/generating it.

The biggest technical challenges are definitely maintaining long-range consistency of plot and managing the LLM context window effectively, while also being cost-effective. I've found that longer input prompts degrade the quality of the output. So I can't just stick in the last 50,000 words and call it a day (at least not yet). Currently, Varu AI uses techniques like dynamic scene summaries, but improving this is an ongoing effort.
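The context-budgeting trade-off described here (can't just stuff in the last 50,000 words, since long prompts degrade output) can be sketched minimally as a token-budgeted window over scene summaries. The helper names, the 4-characters-per-token estimate, and the numbers are illustrative assumptions, not Varu's actual code:

```typescript
// Crude token estimate: ~4 characters per token for English prose.
const approxTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep only the most recent scene summaries that fit within a token
// budget, so the prompt stays short enough that quality doesn't degrade.
function summariesForPrompt(summaries: string[], budgetTokens: number): string[] {
  const kept: string[] = [];
  let used = 0;
  // Walk backwards from the newest summary; stop once the budget is hit.
  for (let i = summaries.length - 1; i >= 0; i--) {
    const cost = approxTokens(summaries[i]);
    if (used + cost > budgetTokens) break;
    kept.unshift(summaries[i]); // preserve chronological order
    used += cost;
  }
  return kept;
}
```

A cap like "no more than 40 past scene summaries" is then just a second constraint on the same window; older material has to survive as coarser summaries (or be dropped) rather than as raw text.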

It's still relatively early days – it's a public beta. The output quality varies, and it absolutely produces a first draft that requires significant human editing and rewriting, not a finished, polished novel. But I'm actively innovating on the underlying systems, and I'm really excited for the future of Varu.

It's built with Next.js, TypeScript, Tailwind, Prisma, Postgres, and Stripe. Hosted on Vercel.

I'd love to get feedback from the HN community, especially from developers, writers, or anyone interested in the intersection of AI and creativity. Happy to answer any questions about the tech, the process, the challenges, or the philosophy behind it.

On that note, I'm particularly curious if others tackling long-context generation have found effective techniques beyond summarization that balance cost, speed, and quality well?

boznz•7mo ago
It is all about accuracy for me. As a reader, a book needs to let you forget about everything else and be immersed in the world and the characters you are reading about; a simple mistake like the wrong name being used, a date being slightly wrong, or a him/her mix-up can yank you out of that and really annoy you. For serious readers it does have to be 100% accuracy; 99.99% is not good enough.

A few weeks ago, as a test, I copied the whole of the last draft of my last novel into two AIs and asked them to find mistakes, unanswered questions, continuity errors, plot holes, etc. They found no genuine ones and lots of false ones, even after some serious prompting. The two test readers I used found five between them, which would have been very embarrassing had they gotten through to publication.

levihanlen•7mo ago
I definitely agree.

Luckily, some things are easier for the AI to remember. I've never once come across the AI making mistakes with things like wrong character name/pronouns/etc.

The main consistency issues I've seen come from remembering the past plot, and worldbuilding.

The past plot issues come from having to store past scenes concisely. Right now, this means I can't include more than 40 of the past scene summaries. But I'm actively thinking of ways to improve this. (Note: I've found that after the input prompt surpasses a certain length, the output quality starts to degrade).

The worldbuilding issues stem from a couple of things. I currently don't have any worldbuilding tracking implemented. I did in the past, but it wasn't as intuitive as storing something like character data. If I included data for locations, for example, the AI seemed either to ignore it entirely or to be arbitrarily limited by it. I'll get to fixing this once I improve a few other features. In my experience, the lack of worldbuilding tracking hasn't necessarily hindered the stories too much. Brandon Sanderson once said that out of plot, character, and setting, setting was by far the least important, and I definitely agree. Though having good worldbuilding will still improve the story.

kadushka•7mo ago
Which models did you try?
levihanlen•7mo ago
Currently the model lineup includes:

- Gemini 2.0 Flash. This is what I use most often, as the quality/price ratio is amazing. (Looking into 2.5 flash preview)

- Deepseek v3 03.24

- GPT 4.1 (The most expensive model by far; the rest are very cheap. About 20x more expensive than 2.0 Flash for my use cases)

- GPT 4.1 mini

- Qwen 3 235B and Qwen 3 30B MoE

- Grok 3 mini

- Gemma 3 27B

kadushka•7mo ago
I haven’t tried any of these, recently I’ve been switching between Claude 3.7, Gemini 2.5 pro, and GPT-4.5, with each one producing something interesting once in a while. I usually sketch a plot (sci-fi short story: 10-20 pages), mention a few details of the environment, describe the main character (often based on myself), and mention relevant writing styles. Sometimes I describe how I want it to write the story, sometimes I don’t. I rarely interact with the story beyond the initial prompt, it’s easier to just resubmit if I don’t like something.
levihanlen•7mo ago
Those are some great models. The reason I don't use them is simply the cost (especially 4.5).

And that's a pretty cool process you use!

sabslikesobs•7mo ago
"Crown of Ash and Honor," the sample book linked on the "Start Writing" page [1], seems like a counterproductive example. With "novel-length drafts" being the main selling point for the app, this draft seems acceptable for the prose itself (it's on par with most AI writing) but doesn't really demonstrate novel-length strength.

The story begins with a boy yearning to be a knight in a war-torn fantasy world apparently without magic, then there are extra-terrestrial invaders hunting a technological artifact and he suddenly knows how to fix a shield generator by crossing wires, then it suddenly turns again into a dark fantasy story with corrupted zombie-ish warriors.

On the technical front, clicking on a link in Section 2 of the story's ToC takes me to that number in Section 2.

[1]: https://www.varu.us/books/cm9w5b2jq0001l204f2r10bnu?scene=1

levihanlen•7mo ago
I agree with your thoughts on Crown of Ash and Honor. That was made with v0.7.25, so some of the algorithms weren't as good. For example, new "plot promises" are made every 4 scenes. In the old version, the AI would gradually get more and more fantastical with these (which is why the story seemed to change genres so much). I've made it so this happens less, but it still happens a bit.
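The every-4-scenes cadence mentioned above amounts to a trivial check; the function name and default interval are made up for illustration:

```typescript
// Mint new plot promises only every `interval` scenes (default 4),
// never on the opening scene.
const shouldMintNewPromises = (sceneIndex: number, interval = 4): boolean =>
  sceneIndex > 0 && sceneIndex % interval === 0;
```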

And that book was mostly me letting the AI do what it wanted with minimal guidance from me. So, I didn't edit many of the plot promises.

I'll definitely switch the sample book to "Under Fallen Banners", though, as I believe it's a bit better.

And thanks for telling me about the bug with the ToC. I'll fix that ASAP!