
Neomacs: Rewriting the Emacs display engine in Rust with GPU rendering via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•1m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•5m ago•0 comments

How do I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
1•m00dy•7m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•8m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
1•okaywriting•14m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•17m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•17m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•18m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•19m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•20m ago•0 comments

Expertise, AI and the Future of Work [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•20m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•20m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•25m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•25m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•26m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•26m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•34m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•35m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•37m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•37m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•37m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
5•pseudolus•38m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•38m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•39m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•39m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•40m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•41m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•41m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•44m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•45m ago•1 comment

Preventing Flash of Incomplete Markdown when streaming AI responses

https://engineering.streak.com/p/preventing-unstyled-markdown-streaming-ai
34•biot•8mo ago

Comments

sim7c00•8mo ago
Fun read. It's weird interacting with ChatGPT around markdown sometimes.

It formats its own output with markdown, so if I ask it for markdown but don't explicitly ask for a downloadable file, it produces valid markdown up to the point where it conflicts with its own formatting, and then the output gets choppy and chunked.

"It's an issue with your prompting" is what I'm sure some customer service rep would be told to tell me :p, because there's money to be made in prompting-skills courses, perhaps. (Cynical view.)

It sure is enjoyable to struggle together with the AI to format its responses correctly :'D

porridgeraisin•8mo ago
You can ask it to put the markdown in a code block. That works well for me. It also works with LaTeX.

monkeycantype•8mo ago
I also ask for any nested code blocks to be delimited with ~~~ rather than ```, so they don't break out of the outer code block.
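
For example (purely illustrative), the outer ``` fence survives because the nested block uses ~~~; if the inner block also used ```, its opening fence would terminate the outer block early:

    ```markdown
    # Answer
    Here is the snippet:
    ~~~ts
    console.log("hello");
    ~~~
    ```
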
kherud•8mo ago
Is there a general solution to this problem? I assume you can only start buffering tokens once you see a construct that has continuations which, once completed, would cause the preceding text to be rendered differently. Of course, you don't want to keep buffering for too long, since that would defeat the purpose of streaming, and you never know whether the potential construct will actually be completed. The solution probably also has to be context-sensitive: within code blocks, for example, you never want to render []() constructs as links.

EDIT: One library I found is https://github.com/thetarnav/streaming-markdown which seems to combine incremental parsing with optimistic rendering; that appears to work well enough in practice.

biot•8mo ago
There are a few things in our implementation that make a more general solution unnecessary. We only need the output to support a limited set of markdown: typically text, bullet points, and links. So we don't need code blocks (yet).

The second thing (not mentioned in the post) is that we are not rendering the markdown to HTML on the server, so []() markdown is sent to the client as []() markdown, not converted into <a href=...>. So even if a []()-style link exists in a code block, that text will still be sent to the client as []() text, just in a single chunk and perhaps with the link URL replaced. The client has its own library to render the markdown to HTML in React.

Also, the answers are typically short, so even if OpenAI outputs some malformed markdown links, the worst case is that we buffer more than we need to and the user sees a pause, after which the entire response is visible at once (the last step is to flush any buffered text to the client).
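
A minimal TypeScript sketch of this kind of buffering (not Streak's actual code; all names are made up): plain text passes straight through, anything that might be a [text](url) link is held back until it completes, and whatever is still held when the stream ends gets flushed.

    type State = "TEXT" | "BRACKET" | "AFTER_BRACKET" | "PAREN";

    class LinkBuffer {
      private state: State = "TEXT";
      private held = "";

      // Feed one streamed chunk; returns the text that is safe to emit now.
      push(chunk: string): string {
        let out = "";
        for (const ch of chunk) {
          switch (this.state) {
            case "TEXT": // plain text passes straight through
              if (ch === "[") { this.state = "BRACKET"; this.held = ch; }
              else out += ch;
              break;
            case "BRACKET": // inside [...]: hold until we see "]"
              this.held += ch;
              if (ch === "]") this.state = "AFTER_BRACKET";
              break;
            case "AFTER_BRACKET": // only "(" keeps the link alive
              if (ch === "(") { this.held += ch; this.state = "PAREN"; }
              else {
                out += this.held; // "[...]" was not a link: release it
                this.reset();
                if (ch === "[") { this.state = "BRACKET"; this.held = ch; }
                else out += ch;
              }
              break;
            case "PAREN": // inside (...): ")" completes the link
              this.held += ch;
              if (ch === ")") { out += this.held; this.reset(); }
              break;
          }
        }
        return out;
      }

      // Stream ended: the "flush any buffered text" step described above.
      flush(): string {
        const rest = this.held;
        this.reset();
        return rest;
      }

      private reset() { this.held = ""; this.state = "TEXT"; }
    }

(biot's follow-up below describes a variant that emits the [text] part eagerly and only holds the stream from the ( onward.)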

kristopolous•8mo ago
This exact problem is why I wrote Streamdown https://github.com/day50-dev/Streamdown

Almost every model has a slightly but meaningfully different opinion of what markdown is and how creative it can be with it.

Doing it well is a non-trivial problem.

munch117•8mo ago
Generating simple HTML instead of markdown would have been a solution. But I guess that ship has sailed.

graboy•8mo ago
Yes. You can define a regex matching what you want, and every regex can be compiled into a state machine (https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...). Then you step the state machine once per character, and you pause the output while the match is still incomplete.
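
A sketch of that idea in TypeScript, hand-compiling the automaton for the link pattern \[[^\]]*\]\([^)]*\) rather than using a regex engine (JavaScript's built-in RegExp has no partial-match API). Each character advances the machine; output is emitted in the start state, held while a match is still possible, and released on accept or on a dead state. (Simplified: a character that kills a match isn't re-examined as the start of a new one.)

    // States: 0 = start, 1 = inside [..], 2 = just saw ],
    // 3 = inside (..), 4 = accept, -1 = dead.
    function step(state: number, ch: string): number {
      switch (state) {
        case 0: return ch === "[" ? 1 : 0;
        case 1: return ch === "]" ? 2 : 1;
        case 2: return ch === "(" ? 3 : -1; // "]" not followed by "(": dead
        case 3: return ch === ")" ? 4 : 3;
        default: return 0;
      }
    }

    function feed(chunk: string, st: { state: number; held: string }): string {
      let out = "";
      for (const ch of chunk) {
        st.state = step(st.state, ch);
        if (st.state === 0) {
          out += ch;            // plain text: emit immediately
        } else if (st.state === 4 || st.state === -1) {
          out += st.held + ch;  // match completed or impossible: release
          st.held = "";
          st.state = 0;
        } else {
          st.held += ch;        // ambiguous: pause the output
        }
      }
      return out;
    }
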
woah•8mo ago
Could this result in edge cases where, due to misformatting or intentional syntax that looks like the start of a markdown link ([), the entire response is hidden from the user?

(This comment when subjected to this processing could look like: "Could this result in edge cases with ")

biot•8mo ago
If you buffer starting with the ( character, then you'd still send the [text] part of the link, and the worst case is that, with no matching ) character to close the link, you end up buffering the remainder of the response. Even then, the last step is "flush any buffered text to the client", so the remainder of the response is eventually transmitted in a single chunk.

There are some easy wins that could improve this further: line endings within links are generally not valid markdown, so if the code ever sees \n, it can just flush the buffered text to the client and reset the state to TEXT.
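
Folded into the LinkBuffer sketch from earlier in the thread, that heuristic is one extra guard at the top of the character loop in push() (again, just a sketch):

    // A "\n" can't occur inside a valid link, so give up on the
    // pending link and release everything held so far.
    if (ch === "\n" && this.state !== "TEXT") {
      out += this.held + ch;
      this.reset();
      continue;
    }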

impure•8mo ago
I do something like this too, because links in emails are insanely long (it's even worse in marketing emails). So I shorten the links to save on tokens and expand them again when I get the response back from the LLM.
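
A sketch of that trick in TypeScript (the placeholder format and names are hypothetical): rewrite each long URL to a short, stable token before prompting, then substitute the originals back into the model's reply.

    // Swap long URLs for short tokens to save prompt tokens.
    const URL_RE = /https?:\/\/[^\s<>"')\]]+/g;

    function shortenUrls(text: string): { text: string; map: Map<string, string> } {
      const map = new Map<string, string>();
      let n = 0;
      const shortened = text.replace(URL_RE, (url) => {
        const token = `https://l.ink/${n++}`; // short enough for the model to echo cheaply
        map.set(token, url);
        return token;
      });
      return { text: shortened, map };
    }

    // Restore the original URLs in the LLM's response.
    function expandUrls(text: string, map: Map<string, string>): string {
      let out = text;
      for (const [token, url] of map) out = out.split(token).join(url);
      return out;
    }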