frontpage.

TUI Studio – visual terminal UI design tool

https://tui.studio/
278•mipselaer•5h ago•166 comments

Launch HN: Spine Swarm (YC S23) – AI agents that collaborate on a visual canvas

https://www.getspine.ai/
37•a24venka•2h ago•38 comments

I traced $2B in grants and 45 states' lobbying behind age‑verification bills

https://old.reddit.com/r/linux/comments/1rshc1f/i_traced_2_billion_in_nonprofit_grants_and_45/
733•shaicoleman•5h ago•292 comments

Bucketsquatting is (finally) dead

https://onecloudplease.com/blog/bucketsquatting-is-finally-dead
230•boyter•7h ago•118 comments

Willingness to look stupid

https://sharif.io/looking-stupid
573•Samin100•4d ago•195 comments

Monster Is the Machine

https://kirkcenter.org/reviews/monster-is-the-machine/
19•freediver•4d ago•2 comments

E2E encrypted messaging on Instagram will no longer be supported after 8 May

https://help.instagram.com/491565145294150
140•mindracer•2h ago•54 comments

Okmain: How to pick an OK main colour of an image

https://dgroshev.com/blog/okmain/
133•dgroshev•4d ago•29 comments

Run NanoClaw in Docker Sandboxes

https://nanoclaw.dev/blog/nanoclaw-docker-sandboxes/
73•outofdistro•2h ago•24 comments

Executing programs inside transformers with exponentially faster inference

https://www.percepta.ai/blog/can-llms-be-computers
210•u1hcw9nx•1d ago•69 comments

The Mrs Fractal: Mirror, Rotate, Scale

https://www.4rknova.com//blog/2025/06/22/mrs-fractal
9•ibobev•4d ago•1 comment

Dijkstra's Crisis: The End of Algol and Beginning of Software Engineering (2010) [pdf]

https://www.tomandmaria.com/Tom/Writing/DijkstrasCrisis_LeidenDRAFT.pdf
34•ipnon•4d ago•3 comments

Show HN: What was the world listening to? Music charts, 20 countries (1940–2025)

https://88mph.fm/
65•matteocantiello•2d ago•29 comments

What we learned from a 22-Day storage bug (and how we fixed it)

https://www.mux.com/blog/22-day-storage-bug
23•mmcclure•3d ago•2 comments

“This is not the computer for you”

https://samhenri.gold/blog/20260312-this-is-not-the-computer-for-you/
736•MBCook•14h ago•291 comments

Ceno, browse the web without internet access

https://ceno.app/en/index.html?
98•mohsen1•9h ago•25 comments

Parallels Confirms MacBook Neo Can Run Windows in a Virtual Machine

https://www.macrumors.com/2026/03/13/macbook-neo-runs-windows-11-vm/
15•tosh•1h ago•8 comments

ATMs didn’t kill bank teller jobs, but the iPhone did

https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller
482•colinprince•1d ago•495 comments

Source code of Swedish e-government services has been leaked

https://darkwebinformer.com/full-source-code-of-swedens-e-government-platform-leaked-from-comprom...
151•tavro•6h ago•136 comments

Two long-lost episodes of 'Doctor Who' have been found

https://apnews.com/article/doctor-who-lost-episodes-found-daleks-6849b09faa6eca9377b2a0db45d47ff8
9•cf100clunk•42m ago•2 comments

IMG_0416 (2024)

https://ben-mini.com/2024/img-0416
161•TigerUniversity•4d ago•36 comments

Gvisor on Raspbian

https://nubificus.co.uk/blog/gvisor-rpi5/
30•_ananos_•5h ago•8 comments

Vite 8.0 Is Out

https://vite.dev/blog/announcing-vite8
450•kothariji•11h ago•146 comments

An old photo of a large BBS (2022)

https://rachelbythebay.com/w/2022/01/26/swcbbs/
245•xbryanx•20h ago•163 comments

Enhancing gut-brain communication reversed cognitive decline in aging mice

https://med.stanford.edu/news/all-news/2026/03/gut-brain-cognitive-decline.html
354•mustaphah•23h ago•168 comments

Bubble Sorted Amen Break

https://parametricavocado.itch.io/amen-sorting
368•eieio•22h ago•114 comments

Understanding the Go Runtime: The Scheduler

https://internals-for-interns.com/posts/go-runtime-scheduler/
147•valyala•4d ago•27 comments

Shall I implement it? No

https://gist.github.com/bretonium/291f4388e2de89a43b25c135b44e41f0
1436•breton•18h ago•518 comments

The Met releases high-def 3D scans of 140 famous art objects

https://www.openculture.com/2026/03/the-met-releases-high-definition-3d-scans-of-140-famous-art-o...
322•coloneltcb•1d ago•65 comments

US private credit defaults hit record 9.2% in 2025, Fitch says

https://www.marketscreener.com/news/us-private-credit-defaults-hit-record-9-2-in-2025-fitch-says-...
408•JumpCrisscross•1d ago•442 comments

Launch HN: Spine Swarm (YC S23) – AI agents that collaborate on a visual canvas

https://www.getspine.ai/
37•a24venka•2h ago
Hey HN! We're Ashwin and Akshay from Spine AI (https://www.getspine.ai).

Spine Swarm is a multi-agent system that works on an infinite visual canvas to complete complex non-coding projects: competitive analysis, financial modeling, SEO audits, pitch decks, interactive prototypes, and more.

Here's a video of Spine Swarm in action: https://youtu.be/R_2-ggpZz0Q

We've been friends for over 13 years. We took our first ML course together at NTU, in a part of campus called North Spine, which is where the name comes from. We went through YC in S23 and have spent about 3 years building Spine across many product iterations.

The core idea: chat is the wrong interface for complex AI work. It's a linear thread, and real projects aren't linear. Sure, you can ask a chatbot to reference the financial model from earlier in the thread, or run research and market sizing together, but you're trusting the model to juggle that context implicitly. There's no way to see how it's connecting the pieces, no way to correct one step without rerunning everything, and no way to branch off and explore two strategies side by side. ChatGPT was a demo that blew up, and chat stuck around as the default interface, not because it's the right abstraction. We thought humans and agents needed a real workspace where the structure of the work is explicit and user-controllable, not hidden inside a context window.

So we built an infinite visual canvas where you think in blocks instead of threads. Each block is our abstraction on top of AI models. There are dedicated block types for LLM calls, image generation, web browsing, apps, slides, spreadsheets, and more. Think of them as Lego bricks for AI workflows: each one does something specific, but they can be snapped together and composed in many different ways. You can connect any block to any other block, and that connection guarantees the passing of context regardless of block type. The whole system is model-agnostic, so in a single workflow you can go from an OpenAI LLM call, to an image generation model like Nano Banana Pro, to Claude generating an interactive app, each block using whatever model fits best. Multiple blocks can fan out from the same input, analyzing it in different ways with different models, then feed their outputs into a downstream block that synthesizes the results.
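
To make the block idea concrete, here's a rough Python sketch of the concept. The block kinds, model names, and run_model() stub are placeholders for illustration, not our actual API:

```python
# A minimal, hypothetical sketch of the block-and-connection idea, not our actual
# API. Block kinds, model names, and run_model() are placeholders.
from dataclasses import dataclass, field

def run_model(model: str, prompt: str, context: list[str]) -> str:
    # Stand-in for a real model call (LLM, image generation, browsing, ...).
    return f"[{model}] {prompt} (with {len(context)} upstream outputs as context)"

@dataclass
class Block:
    name: str
    kind: str                       # e.g. "llm", "image", "browse", "slides"
    model: str                      # each block can use a different model
    prompt: str
    inputs: list["Block"] = field(default_factory=list)
    output: str | None = None

    def run(self) -> str:
        # Any connected upstream block passes its output along as context,
        # regardless of block type.
        context = [b.output for b in self.inputs if b.output is not None]
        self.output = run_model(self.model, self.prompt, context)
        return self.output

# Fan out from one input block, then fan back in to a synthesizing block.
idea = Block("idea", "llm", "gpt", "Summarize the product idea")
prototype = Block("prototype", "app", "claude", "Build an interactive prototype", [idea])
research = Block("research", "browse", "gemini", "Map the competitive landscape", [idea])
deck = Block("deck", "slides", "gpt", "Draft a pitch deck", [prototype, research])

for block in (idea, prototype, research, deck):  # run in dependency order
    print(block.run())
```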

The first version of the canvas was fully manual. Users entered prompts, chose models, ran blocks, and made connections themselves. It clicked with founders and product managers because they could branch in different directions from the same starting point: take a product idea and generate a prototype in one branch, a PRD in another, a competitive critique in a third, and a pitch deck in a fourth, all sharing the same upstream context. But new users didn't want to learn the interface. They kept asking us to build a chat layer that would generate and connect blocks on their behalf, to replicate the way we were using the tool. So we built that, and in doing so discovered something we didn't expect: the agents were capable of running autonomously for hours, producing complete deliverables. It turned out agents could run longer and keep their context windows clean by delegating work to blocks and storing intermediary context on the canvas, rather than holding everything in a single context window.

Here's how it works now. When you submit a task, a central orchestrator decomposes it into subtasks and delegates each to specialized persona agents. These agents operate on the canvas blocks and can override default settings, primarily the model and prompt, to fit each subtask. Agents pick the best model for each block and sometimes run the same block with multiple models to compare and synthesize outputs. Multiple agents work in parallel when their subtasks don't have dependencies, and downstream agents automatically receive context from upstream work. The user doesn't configure any of this. You can also dispatch multiple tasks at once and the system will queue dependent ones or start independent ones immediately.
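
In pseudocode, the scheduling pattern is roughly the following. The subtask names and the persona_agent() stub are illustrative, not our implementation:

```python
# Illustrative sketch of dependency-aware scheduling: independent subtasks run in
# parallel, downstream subtasks receive upstream results.
from concurrent.futures import ThreadPoolExecutor

def persona_agent(name: str, upstream: dict[str, str]) -> str:
    # Stand-in for a specialized agent that configures and runs canvas blocks.
    return f"{name}: done (context from {sorted(upstream) or 'nothing'})"

# Subtasks mapped to the subtasks they depend on.
subtasks: dict[str, list[str]] = {
    "research": [],
    "financial_model": [],
    "competitive_analysis": ["research"],
    "pitch_deck": ["financial_model", "competitive_analysis"],
}

results: dict[str, str] = {}
pending = dict(subtasks)
with ThreadPoolExecutor() as pool:
    while pending:
        # Everything whose dependencies are already satisfied runs in parallel.
        ready = [t for t, deps in pending.items() if all(d in results for d in deps)]
        futures = {t: pool.submit(persona_agent, t, {d: results[d] for d in pending[t]})
                   for t in ready}
        for task, fut in futures.items():
            results[task] = fut.result()
            del pending[task]

print(results)
```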

Agents aren't fully autonomous by default. Any agent can pause execution and ask the user for clarification or feedback before continuing, which keeps the human in the loop where it matters. And once agents have produced output, you can select a subset of blocks on the canvas and iterate on them through the chat without rerunning the entire workflow.

The canvas gives agents something that filesystems and message-passing don't: a persistent, structured representation of the entire project that any agent can read and contribute to at any point. In typical multi-agent systems, context degrades as it passes between agents. The canvas addresses this because agents store intermediary results in blocks rather than trying to hold everything in memory, and they leave explicit structured handoffs designed to be consumed efficiently by the next agent in the chain. Every step is also fully auditable, so you can trace exactly how each agent arrived at its conclusions.
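
To make the handoff idea concrete, here is a toy sketch of what a structured, auditable handoff record could look like. The fields are assumptions for illustration, not our actual canvas schema:

```python
# A toy sketch of structured handoffs and auditability; the record fields are
# assumptions for illustration, not the actual canvas schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class HandoffRecord:
    agent: str
    block: str
    consumed: list[str]   # names of blocks read as upstream context
    summary: str          # structured note left for the next agent

canvas_log: list[HandoffRecord] = []

canvas_log.append(HandoffRecord("researcher", "market_research", [],
                                "Top 5 competitors with pricing and positioning"))
canvas_log.append(HandoffRecord("analyst", "positioning", ["market_research"],
                                "Gap identified in the mid-market segment"))
canvas_log.append(HandoffRecord("writer", "pitch_deck", ["positioning"],
                                "12-slide deck drafted from the positioning notes"))

# Auditing: walk the log to trace exactly which upstream work each output used.
print(json.dumps([asdict(r) for r in canvas_log], indent=2))
```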

We ran benchmarks to validate what we were seeing. On Google DeepMind's DeepSearchQA, which is 900 questions spanning 17 fields, each structured as a causal chain where each step depends on completing the previous one, Spine Swarm scored 87.6% on the full dataset with zero human intervention. For the benchmark we used a subset of block types relevant to the questions (LLM calls, web browsing, table) and removed irrelevant ones like document, spreadsheet, and slide generation. We also disabled human clarification so agents ran fully independently. The agents were not just auditable but also state of the art. The auditability also exposed actual errors in an older benchmark (GAIA Level 3), cases where the expected answer was wrong or ambiguous, which you'd never catch with a black-box pipeline. We detail the methodology, architecture, and benchmark errors in the full writeup: https://blog.getspine.ai/spine-swarm-hits-1-on-gaia-level-3-...

Benchmarks measure accuracy on closed-ended questions. Turns out the same architecture also leads to better open-ended outputs like decks, reports, and prototypes with minimal supervision. We've seen early users split into two camps: some watch the agents work and jump in to redirect mid-flow, others queue a task and come back to a finished deliverable. Both work because the canvas preserves the full chain of work, so you can audit or intervene whenever you want.

A good first task to try: give it your website URL and ask for a full SEO analysis, competitive landscape, and a prioritized growth roadmap with a slide deck. You'll see multiple agents spin up on the canvas simultaneously. People have also used it for fundraising pitch decks with financial models, prototyping features from screenshots and PRDs, competitive analysis reports, and deep-dive learning plans that research a topic from multiple angles and produce structured material you can explore further.

Pricing is usage-based credits tied to block usage and the underlying models used. Agents tend to use more credits than manual workflows because they're tuned to get you the best possible outcome, which means they pick the best blocks and do more work. Details here: https://www.getspine.ai/pricing. There's a free tier, and one honest caveat: we sized it to let you try a real task, but tasks vary in complexity. If you run out before you've had a proper chance to explore, email us at founders@getspine.ai and we'll work with you.

We'd love your feedback on the experience: what worked, what didn't, and where it fell short. We're also curious how others here approach complex, multi-step AI work beyond coding. What tools are you using, and what breaks first? We'll be in the comments all day.

Comments

sebmellen•1h ago
Just as a tiny first piece of feedback, the main marketing website is very hard to understand or grok without a demo of how the tool works. Even just the quick YouTube video that you added in your post here, if embedded, would make a difference.

There are so many "agentic tools" out there that it's really hard to see what differentiates this just based on the website.

a24venka•1h ago
Thanks for the feedback! Definitely agree that we could do more with the marketing site. We're working on a gallery page to showcase some demos.
stuckkeys•1h ago
That is a bold claim for a wrapper lol
mlnj•1h ago
Elaborate?
jpbryan•1h ago
Why do I need a canvas to visualize the work that the agents are doing? I don't want to see their thought process, I just want the end product like how ChatGPT or Claude currently work.
a24venka•1h ago
That is definitely a valid way of using Spine as well. You can just work in the chat and consume the deliverables similar to how you would in other tools.

The canvas helps when you want to trace back why an output wasn't what you expected, or if you're curious to dig deeper.

Even beyond auditability, the canvas also helps agents do better work: they can generate in parallel, explore branches, and pass context to each other in a structured way (especially useful for longer-running tasks).

dude250711•1h ago
Dark UI pattern: it pretends to be immediately usable, only to redirect you to sign-up.
a24venka•1h ago
Fair point, we should be more upfront about the sign-up step. Given that tasks are long-running and token-intensive, we do need an auth barrier to protect against abuse, but we can definitely do a better job signaling that before you hit the canvas.
garciasn•1h ago
Or, just show us in an animated GIF how the product works in practice. Then, should we somehow find benefit in a visual representation of a swarm's workflow, we could sign up rather than having to, unintuitively, scroll down to watch a YouTube video.

e: 'be' to 'we'; oops.

a24venka•55m ago
Good call and noted. We're working on making the product experience more visible upfront.
gravity2060•1h ago
Is it possible to build self-improving swarm loops? (i.e., swarm X builds a thing, swarm Y critiques and improves X's work, repeat…)
a24venka•1h ago
We've only partially explored this so far, but it's a great suggestion.

The canvas architecture naturally supports this kind of loop since agents can already read and build on each other's outputs — so the plumbing is there, it's more about building the right orchestration on top. Definitely something we're exploring.
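
Roughly, such a loop would look like this. This is purely illustrative; builder() and critic() stand in for two swarms and are not something the product does out of the box today:

```python
# Purely illustrative build/critique loop; builder() and critic() stand in for
# two swarms (or two agents).
def builder(task: str, feedback: str | None) -> str:
    return f"draft of '{task}'" + (f", revised to address: {feedback}" if feedback else "")

def critic(draft: str) -> tuple[float, str]:
    # Would normally be a separate agent or model scoring the draft.
    return 0.8, "tighten the executive summary"

draft = builder("competitive analysis", None)
for _ in range(3):                  # bounded so the loop always terminates
    score, feedback = critic(draft)
    if score >= 0.95:
        break
    draft = builder("competitive analysis", feedback)
print(draft)
```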

gravity2060•1h ago
In the demo video you shared (the YouTube link), how many credits did that whole project take? And what does it cost to fix elements of it? For example, if you dislike a minor aspect of the generated spreadsheet, do follow-up instructions use only the narrow subset of agents that handled that subtask, or does it create new agents that have to rebuild context for the narrow follow-up task?
a24venka•1h ago
Credits are consumed by the blocks that get generated, not by the agents themselves. Some blocks are cheaper than others. A simple prompt or image block is a single model call, while browser use or deliverable blocks like documents and spreadsheets run models in a loop and cost more. Blocks also cost more when they have more blocks connected to them (more input tokens).

In the demo video I shared, the task cost about 7,000 credits since it ran around 10 BrowserUse blocks and produced multiple deliverables.

If you want to fix a specific block (or set of blocks), you can select them and the chat will scope itself to primarily work on those. In that case fewer blocks run, so it's cheaper.
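
As a very rough, made-up illustration of how that adds up (none of these numbers are our actual prices):

```python
# Back-of-the-envelope illustration only; every number here is made up.
# Looped blocks (browser use, documents, spreadsheets) cost more than a single
# model call, and each extra inbound connection adds input tokens and thus cost.
BASE_COST = {"prompt": 10, "image": 15, "browser": 120, "document": 250, "spreadsheet": 250}
PER_INPUT = 5  # hypothetical surcharge per connected upstream block

def block_cost(kind: str, num_inputs: int) -> int:
    return BASE_COST[kind] + PER_INPUT * num_inputs

# A task with 10 browser blocks feeding two deliverables:
total = 10 * block_cost("browser", 1) + block_cost("document", 10) + block_cost("spreadsheet", 10)
print(total)  # order of magnitude only; real pricing is on the pricing page
```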

nusl•50m ago
7000 credits, ouch. The tool is really cool, I do think it's super useful. I also like the swarm particle animations in the background.
esafak•1h ago
Is the value prop that I can see what the agent is doing? This is not the way: https://youtu.be/R_2-ggpZz0Q?t=158

How am I supposed to get anything out of this? Consider that agents are going to get faster and run more and more tasks in parallel. This is not manageable for a human to follow in real time. I can barely keep up with one agent in real-time, let alone a swarm.

What I could see being useful is if you monitored the agents and notified me when one is in the middle of something that deserves my attention.

a24venka•57m ago
This is a fair point. We are exploring progressive disclosure on the canvas to make better use of the space and make the key artifacts more readily visible. We also have other panels (chat, tasks, and deliverables) that offer alternate views of what the agent did and the key deliverables.

Beyond human auditability, the canvas helps the agents do a better job by generating in parallel, exploring branches and passing context to each other in a structured way.

BloondAndDoom•55m ago
I didn’t read the post, I checked out the website just like 99% of the people will do.

Simple advice: if you are selling a product whose selling point is being visual, show it on your website. Not in a YouTube video, but with actual screenshots or a short 10-second video/GIF.

a24venka•53m ago
Definite miss on our part, we're working on making the product experience more visible upfront on our landing page.
salomonk_mur•33m ago
Friend, in the age of AI and even more so if you are selling an AI product, all you need is literally 2 screenshots and one prompt.
metalliqaz•17m ago
There is an inverse relationship between how obviously useful a product is and how easy it is to produce screenshots.
onion2k•24m ago
It's a shame the team don't have access to a product that would automatically research and implement what's needed on an AI product website.
pqs•53m ago
I had to read this text in order to understand what this tool does, because I couldn't tell from the website (without watching a video). You should use Spine to improve your website. ;-)
gravity2060•35m ago
What does it mean to say 30,000 monthly credits and 1,500 daily refresh credits? If my project takes 7,000 credits (the way your demo does), then does that mean I couldn't actually do it on the lowest available pricing plan, because I couldn't use 7,000 credits in one run? If this is the case, what an abysmal pricing model!
a24venka•29m ago
The daily refresh isn't a cap on usage, it's additional credits you get each day (resets to 1,500 nightly regardless of use).

You can use your full 30k balance in a single run if needed. The daily refresh just tops you back up over time so you're not waiting for a monthly reset.
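
A quick numeric sketch of what that means in practice (how the two pools are drawn down is simplified here; see the pricing page for specifics):

```python
# Quick numeric sketch: a 30,000-credit monthly balance plus a daily pool that
# resets to 1,500 each night.
monthly_balance = 30_000
daily_pool = 1_500

def spend(amount: int) -> None:
    global monthly_balance, daily_pool
    from_daily = min(amount, daily_pool)   # assume the daily pool is used first
    daily_pool -= from_daily
    monthly_balance -= amount - from_daily

spend(7_000)                         # a demo-sized task fits in a single run
print(monthly_balance, daily_pool)   # 24500 0
daily_pool = 1_500                   # nightly reset, regardless of use
```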

aleda145•34m ago
Super cool!

I'm completely sold on the canvas layer. Embracing non-linearity is such a boon when you're in the ideas stage. Once you have verified it, though, moving it to another medium (a document, a presentation, or just code) is often the best choice.

Do you see the canvases created with Spine as one-offs that you discard once you have your deliverable, or as something living that you keep around?

I'm building a side project for running SQL on a canvas (kavla.dev), so I'm thinking about canvas workflows all the time!

a24venka•24m ago
Thanks! Great question. We see canvases as living workspaces: you can revisit, iterate on, and build on them over time.

But the deliverables (docs, slides, code) are first-class outputs you can export and use independently. So it works both ways depending on the workflow.

Kavla looks cool, canvas-based SQL is a great use case for this kind of thinking!

aleda145•18m ago
Nice! I'll make sure to try out Spine this weekend, if you want detailed feedback feel free to email me. You can find it in my profile.
woeirua•32m ago
Interesting idea, I wanted to see an example of the agents working on a canvas when I opened your page. I saw nothing of the sort. Sorry, but immediate fail.

This may be too harsh, but you need to make it immediately clear to someone today why they can't just have Claude Code one-shot your app!

a24venka•27m ago
This is good feedback and definitely something we are improving.
airstrike•22m ago
Congrats on the launch! Meta comment, but I just ain't reading all of the above. You need to be able to explain this in about 20% of the words or you'll lose people, especially VCs.

My advice is to start with "Spine Swarm solves _____" then how, then why you're different. 3 short paragraphs, preferably 1-2 sentences each.

a24venka•8m ago
Agreed. We will make sure this comes through in our website.
TheTaytay•22m ago
I think this is really neat. You should probably take it as a compliment that the biggest criticisms so far are about the website landing page. ;)

I like canvases in general, and I especially like them for mentally organizing and referring to this sort of broad work. (Honestly, I think zoomable canvases would make a better window manager in general, but I digress)

One small piece of friction: My default mouse-based ways of dragging the canvas around (that work in most canvases like Figma) aren't working. I saw that you had a tutorial, and I have learned to hold space now, but I prefer the "hold middle mouse button to drag my canvas view around".

I've got a couple of research tasks running now, and my current open questions as a very new user are:
1) How easy will it be to store the outputs in a GitHub repository?
2) How easy will it be to refer back to this later?
3) Can I build upon it manually or automatically?
4) Can I (securely) share it with someone else for them to see and build upon?
5) Can I do something "locally" with it? Not necessarily the model, but my preferred interface for LLMs at this point is Claude Code. Could I have a Claude Code instance running in one of these boxes somehow?
6) What if I want to do private stuff with it and don't like the traffic going through Spine's servers? Could I pay them for the interface, but bring my own keys? (Related: can I self-host somehow?)
7) When this is done, each artifact it found (screenshot, webpage, etc.) is going to be helpful. The data-hoarder in me wants to make sure I can search these later. Heck, if I could do that, this would become my preferred "web browser". (But again, I digress.)

a24venka•9m ago
Really appreciate the detailed feedback and questions! And yes, we'll take the website criticism as a compliment :)

Good callout on the canvas navigation, we'll look into middle mouse button support.

To answer your questions:
1) GitHub integration is on our roadmap. Right now you can export outputs manually, but we want to make this seamless.
2) All your canvases are saved and you can search them by name in your dashboard. We're also working on a dedicated section for deliverables across canvases.
3) Yes to both! You can manually add or edit blocks, or kick off new agent runs that build on existing work.
4) You can currently only share public links to your canvas (and you can make it private again at any point). We are testing a teams feature that lets you share canvases securely with members of your team, and roles plus email-based sharing controls are on our roadmap.
5) Claude Code in a block is a really interesting idea. We don't support that today, but we're thinking about computer-use and coding workflows.
6) BYOK (bring your own keys) is something we've heard interest in and are considering. Self-hosting isn't available right now, though we do support private deployments for enterprise customers if that's ever relevant.
7) Love the 'preferred web browser' framing. Right now you can search canvases, but searchable artifacts across canvases is definitely where we want to head.

Thanks for giving it a real spin, this kind of feedback is incredibly valuable.

vivzkestrel•21m ago
Excuse my memory at this point, but aren't there like 100 of these posted on HN every month, all of which have something to do with multi-agent collaboration and support 1,000 models?
visekr•18m ago
Whoa, congrats on the launch. lol, I launched my visual canvas for agents today too. I went in more of a collaborative canvas IDE / agent orchestration direction. But very cool to see your take on it.

https://getmesa.dev is mine

embedding-shape•13m ago
Rather than just finding a way to link your own product, why don't you do the rest of us a favor and at least provide a comparison, so it becomes a tiny bit informative instead of just spammy?

Nothing wrong with sharing your own stuff, but at least contribute something back to the submission you're commenting on.

johnyzee•15m ago
Calling it a 'canvas' makes me think that this tool is about AI agents doing some kind of collaborative drawing. Looking at the vid though, it seems more like an environment for visually organizing and managing agentic work (which seems very cool, and quite a bit more than just a canvas).