frontpage.

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
75•ColinWright•1h ago•41 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
21•surprisetalk•1h ago•18 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
121•AlexeyBrin•7h ago•24 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
102•alephnerd•2h ago•55 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
824•klaussilveira•21h ago•248 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
56•vinhnx•4h ago•7 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
53•thelok•3h ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
105•1vuio0pswjnm7•8h ago•121 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1058•xnx•1d ago•608 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
76•onurkanbkrc•6h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
478•theblazehen•2d ago•175 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
205•jesperordrup•11h ago•69 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
547•nar001•5h ago•253 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
216•alainrk•6h ago•335 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
35•rbanffy•4d ago•7 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
28•marklit•5d ago•2 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
3•momciloo•1h ago•0 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
4•valyala•1h ago•1 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
113•videotopia•4d ago•30 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
4•valyala•1h ago•0 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
73•speckx•4d ago•74 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
68•mellosouls•4h ago•73 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
273•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
199•limoce•4d ago•111 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
285•dmpetrov•22h ago•153 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
21•sandGorgon•2d ago•11 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
555•todsacerdoti•1d ago•268 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
43•matt_d•4d ago•18 comments

Show HN: Agno – A full-stack framework for building Multi-Agent Systems

https://github.com/agno-agi/agno
76•bediashpreet•8mo ago

Comments

JimDabell•8mo ago
> At Agno, we're obsessed with performance. Why? because even simple AI workflows can spawn thousands of Agents. Scale that to a modest number of users and performance becomes a bottleneck.

This strikes me as odd. Aren’t all these agents pushing tokens through LLMs? The number of milliseconds needed to instantiate a Python object and the number of kilobytes it takes up in memory seem irrelevant in this context.
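A back-of-envelope check of that point, using only the standard library (the Agent class below is a bare stand-in, not Agno's):

    import timeit

    class Agent:
        """Stand-in for a framework agent object: a few attributes, no I/O."""
        def __init__(self, model, tools=None, memory=None):
            self.model = model
            self.tools = tools or []
            self.memory = memory or {}

    # Time 10,000 constructions; on typical hardware this is a few milliseconds total.
    construct_s = timeit.timeit(lambda: Agent("gpt-4o", tools=["search"]), number=10_000)

    # A single LLM round-trip is commonly on the order of seconds, so per-object
    # construction cost sits several orders of magnitude below inference latency.
    print(f"10k instantiations: {construct_s:.4f}s total, "
          f"{construct_s / 10_000 * 1e6:.1f}us each")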

sippeangelo•8mo ago
I'm really curious what simple workflows they've seen that spawn THOUSANDS of agents?!
bediashpreet•8mo ago
In general we instantiate one or even multiple agents per request (to limit data and resource access). At moderate scale, like 10,000 requests per minute, even small delays can impact user experience and resource usage.

Another example: there's a large Fortune 10 company that has built an agentic system to sift through data in spreadsheets; they create 1 agent per row to validate everything in that row. You can see how that would scale to thousands of agents per minute.
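A minimal sketch of that one-agent-per-row pattern; validate_row below is a hypothetical stand-in for whatever the real system asks its per-row agent to do:

    import csv
    from concurrent.futures import ThreadPoolExecutor

    def validate_row(row: dict) -> dict:
        # Hypothetical: in the real system this would instantiate an agent scoped
        # to this row's data and have an LLM validate it; here we just flag blanks.
        issues = [col for col, val in row.items() if not str(val).strip()]
        return {"row": row, "issues": issues}

    def validate_spreadsheet(path: str, max_workers: int = 32) -> list[dict]:
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        # One worker/agent per row: a 10,000-row sheet means 10,000 of them,
        # which is where per-agent framework overhead starts to add up.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(validate_row, rows))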

gkapur•8mo ago
If you are running things locally (I would think especially on the edge, whether or not the LLM is local or in the cloud), this would matter. Or if you are running some sort of agent orchestration where the output of LLMs is streamed, it could possibly matter?
bediashpreet•8mo ago
You’re right, inference is typically the bottleneck and it’s reasonable to think the framework’s performance might not be critical. But here’s why we care deeply about it:

- High Performance = Less Bloat: As a software engineer, I value lean, minimal-dependency libraries. A performant framework means the authors have kept the underlying codebase lean and simple. For example: with Agno, the Agent is the base class and is 1 file, whereas with LangChain you'll get 5-7 layers of inheritance. Another example: when you install crewai, it installs the kubernetes library (along with half of PyPI). Agno ships with a very small set of dependencies (I think <10 required).

- While inference is one part of the equation, parallel tool executions, async knowledge search and async memory updates improve the entire system's performance. Because we're focused on performance, you're guaranteed a top-of-the-line experience without thinking about it; it's a core part of our philosophy.

- Milliseconds Matter: When deploying agents in production, you’re often instantiating one or even multiple agents per request (to limit data and resource access). At moderate scale, like 10,000 requests per minute, even small delays can impact user experience and resource usage.

- Scalability and Cost Efficiency: High-performance frameworks help reduce infrastructure costs, enabling smoother scaling as your user base grows.

I'm not sure why you would NOT want a performant library. Sure, inference is a part of it (and isn't in our control), but I'd definitely want to use libraries from engineers who value performance.
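To make the parallel-tools point concrete, here is a generic asyncio sketch of a single agent turn; the coroutines are illustrative stand-ins, not Agno internals:

    import asyncio

    async def web_search(query: str) -> str:
        await asyncio.sleep(1.0)      # stand-in for a network-bound tool call
        return f"results for {query!r}"

    async def knowledge_search(query: str) -> str:
        await asyncio.sleep(1.0)      # stand-in for a vector-store lookup
        return f"docs for {query!r}"

    async def update_memory(query: str) -> None:
        await asyncio.sleep(0.5)      # stand-in for persisting session memory

    async def run_turn(query: str) -> list[str]:
        # Run the tool call, knowledge search and memory update concurrently:
        # about 1s of wall clock instead of ~2.5s if executed one after another.
        results, docs, _ = await asyncio.gather(
            web_search(query), knowledge_search(query), update_memory(query)
        )
        return [results, docs]

    print(asyncio.run(run_turn("quarterly revenue")))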

onebitwise•8mo ago
I feel the cookbook is a little messy. I would love to see an example using collaborative agents, like an editorial team that writes articles based on searches and topic experts (just as an example).

Might it be better to have a separate repo for examples?

Btw great project! Kudos

maxtermed•8mo ago
Good point. The cookbook can be hard to navigate right now, but that's mostly because the team is putting out a tremendous amount of work and updating things constantly, which is a good problem to have.

This example might be close to what you're describing: https://github.com/agno-agi/agno/blob/main/cookbook/workflow...

It chains agents for web research, content extraction, and writing with citations.

I used it as a starting point for a couple projects that are now in production. It helped clarify how to structure workflows.
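For readers who can't follow the truncated link, the general shape of such a chained workflow is roughly the following; llm() is a hypothetical stand-in for a model call, and this is not the cookbook's actual code:

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for a model call; replace with a real client."""
        return f"[model output for: {prompt[:60]}...]"

    def research(topic: str) -> str:
        return llm(f"List the 5 most relevant sources on: {topic}")

    def extract(sources: str) -> str:
        return llm(f"Extract key facts, with URLs as citations, from:\n{sources}")

    def write(topic: str, facts: str) -> str:
        return llm(f"Write an article on {topic} using only these cited facts:\n{facts}")

    def blog_workflow(topic: str) -> str:
        # Each step is an agent with a narrow job; the output of one step
        # becomes the input to the next, citations included.
        return write(topic, extract(research(topic)))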

bediashpreet•8mo ago
Thank you for the feedback and the kind words.

Agree that the cookbooks have gotten messy. Not an excuse, but sharing the root cause behind it: we're building very, very fast and putting examples out for users quickly. We maintain backwards compatibility, so sometimes you see 2 examples doing the same thing.

I'll make it a point to clean up the cookbooks and share more examples under this comment. Here are 2 to get started:

- Content creator team: https://github.com/agno-agi/agno/blob/main/cookbook/examples...

- Blog post generator workflow: https://github.com/agno-agi/agno/blob/main/cookbook/workflow...

Both are easily extensible. Always available for feedback at ashpreet[at]agno[dot]com

ElleNeal•8mo ago
I love Agno, they make it so easy to build agents for my Databutton application. Great work guys!!
bediashpreet•8mo ago
Thank you for the kind words <3
LarsenCC•8mo ago
This is awesome!
bediashpreet•8mo ago
<3
idan707•8mo ago
Over the past few months, I've transitioned to using Agno in production, and I have to say, the experience has been nothing short of fantastic. A huge thank you for creating such an incredible framework!
bediashpreet•8mo ago
Thank you for the kind words <3
lerchmo•8mo ago
One thing I don’t understand about these agent frameworks… Cursor, Claude, Claude Code, Cline, v0… all of the large production agents with leaked prompts use XML function calling, yet it seems like these frameworks only support native JSON-schema function calling. This is maybe the most important decision, and from my experience native tool calling is just about the worst option.
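For readers unfamiliar with the distinction: native tool calling hands the provider a JSON schema and gets structured arguments back, while the leaked-prompt style asks the model to emit XML-like tags that the client parses itself. A rough illustration of the two conventions (not any particular product's actual prompt or schema):

    import json
    import re

    # Native (JSON-schema) style: the tool is declared as a schema and the
    # provider's API returns structured arguments for it.
    tool_schema = {
        "name": "read_file",
        "description": "Read a file from the workspace",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    }

    # XML style: the system prompt tells the model to answer with tags like
    # <read_file><path>...</path></read_file>, and the client parses them out.
    model_output = "Let me check the config. <read_file><path>pyproject.toml</path></read_file>"

    def parse_xml_call(text: str, tool: str) -> dict | None:
        match = re.search(rf"<{tool}>(.*?)</{tool}>", text, re.DOTALL)
        if not match:
            return None
        args = dict(re.findall(r"<(\w+)>(.*?)</\1>", match.group(1), re.DOTALL))
        return {"name": tool, "arguments": args}

    print(json.dumps(parse_xml_call(model_output, "read_file"), indent=2))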
maxtermed•8mo ago
I've been using this framework for a while, it's really solid IMO. It abstracts just enough to make building reliable agents straightforward, but still leaves lots of room for customization.

The way agent construction is laid out (with a clear path for progressively adding tools, memory, knowledge, storage, etc.) feels very logical.

Definitely lowered the time it takes to get something working.
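That start-minimal-then-layer-on progression could look something like the sketch below. This is a generic illustration of the pattern, not Agno's actual classes or module paths:

    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass
    class Agent:
        model: str
        instructions: str = ""
        tools: list[Callable] = field(default_factory=list)    # callable tools
        memory: dict = field(default_factory=dict)              # per-session memory
        knowledge: list[str] = field(default_factory=list)      # documents for retrieval
        storage: Optional[str] = None                           # e.g. a sqlite path

        def run(self, prompt: str) -> str:
            # A real framework would do tool routing, retrieval and the LLM call here.
            return f"[{self.model}] {prompt}"

    # Step 1: bare agent.
    agent = Agent(model="gpt-4o", instructions="You are a research assistant.")

    # Later steps: progressively add capabilities without changing the call site.
    agent.tools.append(lambda q: f"search results for {q}")
    agent.knowledge.append("internal-handbook.md")
    agent.storage = "sessions.db"

    print(agent.run("Summarise the handbook."))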

bediashpreet•8mo ago
Thank you for using Agno and the kind words!
bosky101•8mo ago
Your first 2 examples in your README involve single agents. These are a waste of time. We don't need yet another LLM API call wrapper. An agentic system with just 1 tool/agent is pointless.

Thankfully your third example, halfway down, does have one with 3 agents. It may have helped to have a judge/architect agent.

It's not clear what infra is required or used.

It would help to have helper functions to get and set session state/memory. Being able to bootstrap from JSON could be a good feature.

It would help to have different agents with different LLMs, to show that you have thought things through.

Why should spawning thousands of agents even be in your benchmark? Since when did we start counting variables? Maybe saying each agent takes X memory/RAM would suffice, because everything else is subjective and can't be generalized.

Consider a REST API that can do what the examples did, via curl? (A rough sketch follows below.)

Good luck!
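A rough sketch of two of the suggestions above (session-state helpers plus a curl-able REST wrapper), using FastAPI with an in-memory store; the endpoints, fields and the agent call are all hypothetical:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    sessions: dict[str, dict] = {}   # in-memory session state; swap for real storage

    class RunRequest(BaseModel):
        session_id: str
        prompt: str

    def get_state(session_id: str) -> dict:
        return sessions.setdefault(session_id, {"history": []})

    def set_state(session_id: str, state: dict) -> None:
        sessions[session_id] = state

    @app.post("/sessions/{session_id}/bootstrap")
    def bootstrap(session_id: str, state: dict):
        # Bootstrap a session's state directly from a JSON request body.
        set_state(session_id, state)
        return {"ok": True}

    @app.post("/run")
    def run(req: RunRequest):
        state = get_state(req.session_id)
        reply = f"[agent reply to: {req.prompt}]"   # stand-in for a real agent call
        state["history"].append({"prompt": req.prompt, "reply": reply})
        return {"reply": reply, "turns": len(state["history"])}

    # curl -X POST localhost:8000/run -H 'Content-Type: application/json' \
    #   -d '{"session_id": "s1", "prompt": "hello"}'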

fcap•8mo ago
In my opinion to really lift off here you need to make sure we can use these agents in production. That means the complete supply chain has to be considered. The deployment part is the heavy part and most people can run it locally. So if you close that gap people will be able to mass adopt. I am totally fine if you monetize it as a cloud service but give a full docs from code, test monitoring to deployment. And one more thing. Show what the framework is capable of. What can I do. Lots of videos and use cases here. Every single second needs to be pushed out.