frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
590•klaussilveira•11h ago•170 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
896•xnx•16h ago•544 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
93•matheusalmeida•1d ago•22 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
20•helloplanets•4d ago•13 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
26•videotopia•4d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
200•isitcontent•11h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
199•dmpetrov•11h ago•91 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
312•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
353•aktau•17h ago•176 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
22•romes•4d ago•2 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
354•ostacke•17h ago•92 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
458•todsacerdoti•19h ago•229 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
7•bikenaga•3d ago•1 comment

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
80•quibono•4d ago•18 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
256•eljojo•14h ago•154 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
53•kmm•4d ago•3 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
390•lstoll•17h ago•263 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
231•i5heu•14h ago•177 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
120•SerCe•7h ago•98 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
136•vmatsiiako•16h ago•59 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•10h ago•12 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
12•neogoose•4h ago•7 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
25•gmays•6h ago•7 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
44•gfortaine•9h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
271•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1043•cdrnsf•20h ago•431 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
171•limoce•3d ago•90 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
60•rescrv•19h ago•22 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
89•antves•1d ago•64 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

5 Things to Try with Gemini 3 Pro in Gemini CLI

https://developers.googleblog.com/en/5-things-to-try-with-gemini-3-pro-in-gemini-cli/
104•keithba•2mo ago

Comments

chis•2mo ago
Has anyone switched to Gemini CLI? Keeping up with which model is on the leading edge is important but also exhausting, especially since every model has different idiosyncrasies you have to learn before you can work with it effectively.

Currently my ranking is

* Cursor composer: impressively fast and able but not tuned to be that agentic, so it's better for one-shot code changes than long-running tasks. Fantastic UI.

* Claude Code: Works great if you can set up a verifiable environment and a clear plan, then set it loose to build something for an hour

* Grok: Similar to cursor composer but slower and more agentic. Not currently using.

* ChatGPT Codex, Gemini: Haven't tried yet.

all2•2mo ago
I just use claude code for most things. I'll fall back to a web UI (Grok, Claude, or Gemini, depending on what service I've exhausted) if I need to.
malnourish•2mo ago
I'm still using Roo Code with Litellm. I haven't yet found or heard a compelling reason to switch.
xnx•2mo ago
Use Gemini 3, Gemini CLI, and Antigravity.
bionhoward•2mo ago
Model provider CLIs are a trap: less freedom of choice, less privacy, way more prohibitions buried in the fine print.
bobson381•2mo ago
Make a CLI router like OpenRouter that just accepts your args and passes them to whichever one is leading at the moment? Could be fun.
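
Something like this, maybe (a minimal sketch in Node; the LEADER env var and the tool table are made up, and it assumes each CLI accepts the same trailing arguments):

    import { spawn } from "node:child_process";

    // Pick whichever CLI is "leading" this week; LEADER is a made-up env var.
    const leader = process.env.LEADER ?? "claude";
    const tools: Record<string, string> = {
      claude: "claude",
      codex: "codex",
      gemini: "gemini",
    };

    const bin = tools[leader];
    if (!bin) {
      console.error(`unknown leader "${leader}", expected one of: ${Object.keys(tools).join(", ")}`);
      process.exit(1);
    }

    // Forward all arguments untouched and share the terminal with the chosen CLI.
    const child = spawn(bin, process.argv.slice(2), { stdio: "inherit" });
    child.on("exit", (code) => process.exit(code ?? 1));
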
embedding-shape•2mo ago
I haven't tried Gemini CLI with Gemini 3 Pro, but I've tried pretty much all the others. I usually run four agents at the same time for each task, giving them the same prompt and then comparing their responses.
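
Roughly like this (a sketch only; the non-interactive flags `claude -p`, `codex exec`, and `gemini -p` are from memory and may not match the current tools, so check each CLI's --help):

    import { execFile } from "node:child_process";
    import { promisify } from "node:util";

    const run = promisify(execFile);
    const prompt = "Explain what this repo's Makefile does.";

    // One entry per agent; the flags are assumptions, adjust to the real CLIs.
    const agents: [string, string[]][] = [
      ["claude", ["-p", prompt]],
      ["codex", ["exec", prompt]],
      ["gemini", ["-p", prompt]],
    ];

    // Fan the same prompt out in parallel and collect whatever comes back.
    const results = await Promise.allSettled(
      agents.map(([bin, args]) => run(bin, args, { maxBuffer: 10 * 1024 * 1024 })),
    );

    results.forEach((r, i) => {
      console.log(`\n===== ${agents[i][0]} =====`);
      console.log(r.status === "fulfilled" ? r.value.stdout : `failed: ${r.reason}`);
    });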

Gemini CLI has the lowest rate limits and the least ability to steer the models (not sure whether that's a model or a tooling thing, but I cannot get any of the Google models to stop outputting code comments constantly and everywhere), and the API seemingly becomes unavailable frequently for some reason.

Claude Code is fast and easy to steer, but the quality degrades quickly and seemingly at random, apparently by time of day. I'm not sure if they're running differently quantized models at different times, but strangely there is a clear quality difference depending on when in the day I use it. Haven't found a way of verifying this though; ideas welcome.

Codex CLI is probably what I use the most, with "gpt-5+high". It's kind of slow, a lot slower than Claude Code, but it almost always gets it right on the first try, and seemingly no other model+tool does instruction following as well; even if your AGENTS.md is almost overflowing with rules and requirements, it seems to nail things anyway.

joedevon•2mo ago
Codex has gotten kind of nerfed with their weird choice to limit the lines of code read to 250 and to drop the middle of the context a lot. None of the CLIs are performing well for me right now. I'm on Codex and Claude Max, btw. Disappointing.
nateb2022•2mo ago
> Gemini CLI has the lowest rate limits

For Gemini 3.0, the rate limits are very very generous. Google says rate limits refresh every five hours, and that only “a very small fraction of power users” will ever hit the limits.

pshirshov•2mo ago
I hit them constantly on AI Ultra subscription.
esafak•2mo ago
I think the TUI agents are pretty similar; I can use CC, Codex, Gemini, and opencode interchangeably.
NamlchakKhandro•2mo ago
once you have opencode, why would you bother with any of the others?
dinkleberg•2mo ago
Maybe these new releases bring some serious enhancements, but my experience with the Gemini CLI has been dreadful. It craps out at least half of the time. When it works it is ridiculously fast, so I keep trying it. But it has proven far inferior to the Claude Code experience in my usage.
renewiltord•2mo ago
Codex with gpt-5-high I trust to get things right without much effort. Claude is the best tool-using agent out there; very good at using the tools to ground whether changes are producing outcomes.
NamlchakKhandro•2mo ago
why would you bother with any of these when opencode exists?
NaomiLehman•2mo ago
I find Claude Code a bit better than opencode. I think it's not only about the model but also about orchestration and context handling.
recitedropper•2mo ago
Nice, without this thread I would never have known Gemini 3 released today.

Going to download Gemini CLI right now™ and see how it performs™ against Cursor, Claude Code, Aider, OpenCode, Droid, Warp, Devin, and ForgeCode.

cortesoft•2mo ago
There are currently 6 front page posts about Gemini 3 being released today.
ttoinou•2mo ago
It's not enough; it should be at least 10.
Oras•2mo ago
5 things to try (if you can try them). They require an Ultra subscription.
belter•2mo ago
And I would like to know why the mods don't condense them as duplicates...
dang•2mo ago
We sleep.
tekacs•2mo ago
I'm pretty sure that's the joke the GP is making.
ChrisArchitect•2mo ago
More discussion: https://news.ycombinator.com/item?id=45968043
embedding-shape•2mo ago
Correct me if I'm wrong, but doesn't this demo video of the user instructing the model to use `git bisect` to find a commit (https://storage.googleapis.com/gweb-developer-goog-blog-asse...) actually showcase a big issue with today's models?

In the end, the model only ran `git bisect` (if we're to believe the video, at least) for various pointless reasons; it isn't being used for what it's usually used for. Why did it run bisect at all? Well, the user asked the LLM to use `git bisect` to find a specific commit, but that doesn't make sense: `git bisect` is not for that, so what the user is asking for isn't possible.
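
For contrast, a real bisect needs a known-good commit, a known-bad commit, and a pass/fail check; git then binary-searches for the commit that introduced the failure. A minimal sketch of that workflow, driven from Node since that's what the CLI itself is written in (the `v1.2.0` tag and `npm test` are made-up placeholders):

    import { execSync } from "node:child_process";

    const git = (args: string) => execSync(`git ${args}`, { stdio: "inherit" });

    git("bisect start");
    git("bisect bad HEAD");      // the current tip is known to be broken
    git("bisect good v1.2.0");   // a tag known to still work (placeholder)
    git("bisect run npm test");  // git binary-searches, running the check at each step
    git("bisect reset");         // return to the original HEAD when done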

Instead of stopping and saying "Hey, that's not the right idea, did you mean ...?" to make sure what the user wants is actually possible, the model runs its own race and starts invoking a bunch of other git commands, because that's how you'd actually find the commit the user is looking for, and then finally does some git bisecting stuff just for fun, even though it had already found the right commit.

I think I see the same thing when letting LLMs code as well. If you give them some work that is actually impossible but the words kind of make sense, they'll produce something, just not what you wanted. I think they're doing exactly the same thing: bypassing what you clearly instructed so they at least do something.

I'm not sure if I'm just hallucinating that they act like that, but LLMs doing "the wrong thing" has hit me more than once, and if you imagine something more dangerous than `do a git bisect`, it seems to me like that video is telling us Gemini 3 Pro will act exactly the same way, with no improvements on that front.

Also, do these blog posts not go through engineering review before they're published? Besides the video not really showcasing anything of interest, the prompt itself doesn't make any sense and would have been caught if an engineer who uses git at least weekly had reviewed it beforehand.

8organicbits•2mo ago
Looks right to me. At t=0:50 it shows other git bisect commands being run. The git bisect reset at the end is ending the bisection because it's complete.

Video is really a terrible format for terminal demos; you've got to pause it because the screen flashes text faster than you can read...

embedding-shape•2mo ago
> Looks right to me. At t=0:50 it shows other git bisect commands being run. The git bisect reset at the end is ending the bisection because it's complete.

But what is that actually doing? It looks like when it's running the git bisect, it already knows what the commit is and could have just returned it. The only reason it ran any bisecting at all was because the user (erroneously) asked it specifically to use git bisect. It didn't have to.

alecco•2mo ago
I don't like JavaScript for CLIs. I think OpenAI did the right thing by switching to Rust.
nateb2022•2mo ago
Considering a lot of people will be using Gemini on full-stack or frontend applications, it doesn't make sense to write the CLI in Rust and integrate with JS/TS separately, as opposed to writing it in TypeScript so that you can work directly with the packages designed for that ecosystem.
RealityVoid•2mo ago
Excuse my ignorance, but... what? It runs CLI commands; why would the language it's written in matter for the integration? What am I missing here?
nateb2022•2mo ago
From a practicality standpoint, writing in TS allows you to execute arbitrary JS in-process with a simple eval() call; I have no doubt they'll use this to more deeply integrate agents with existing codebases in the near future. E.g., rather than just reading the code, agents will be able to directly import the data structures themselves, use libraries within the JS/TS ecosystem to parse code into an AST, and execute in-process test harnesses to validate behavior while editing.
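
For instance, parsing a snippet into an AST in-process is only a few lines with the TypeScript compiler API (a rough sketch; the snippet and filename are made up):

    import ts from "typescript";

    const source = "export const answer: number = 42;";
    const file = ts.createSourceFile("snippet.ts", source, ts.ScriptTarget.Latest, true);

    // Walk the AST and print each node's kind, indented by depth.
    const walk = (node: ts.Node, depth = 0): void => {
      console.log(`${"  ".repeat(depth)}${ts.SyntaxKind[node.kind]}`);
      node.forEachChild((child) => walk(child, depth + 1));
    };
    walk(file);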

And the MCP field is already pretty heavily saturated with TypeScript and JSON Schema, so using TS for it is a very ergonomic experience. Also, since it's written in TypeScript, it's much easier to integrate it with editors like VSCode (or Google's new Antigravity), which are built on top of Electron.

RealityVoid•2mo ago
Are they using it this way? I think if you have access to the CLI you can do whatever you want, and it already uses tools, uses libraries, and parses code. It's just using the CLI. Not to mention it's much better this way, because I can see it doing all these things: I can see what commands it wants to run, and it asks me about them.
NamlchakKhandro•2mo ago
No, I think your reasoning is flawed.

Think about how it would do the same for Python/Go/Lua/etc.

I would never write a TUI like this and simply `eval` in the same scope as the TUI frames.

You shell out and run it with whatever language engine is required.

opencode has the ability to lean on LSPs and formatters, but those are not required to be written in the same language as the TUI.
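
And if someone really did want in-process evaluation without sharing scope with the TUI, Node's built-in vm module is the usual halfway house (a minimal sketch; note that vm is explicitly not a security boundary):

    import vm from "node:vm";

    // Run agent-generated JS in a fresh context instead of eval'ing in the TUI's own scope.
    const sandbox: { result?: unknown } = {};
    vm.createContext(sandbox);
    vm.runInContext("result = [1, 2, 3].map((x) => x * 2);", sandbox, { timeout: 1000 });
    console.log(sandbox.result); // [ 2, 4, 6 ]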

ipsum2•2mo ago
Wow, so excited to try!

> gemini

It seems like you don't have access to Gemini 3. Learn more at https://goo.gle/enable-preview-features
To disable Gemini 3, disable "Preview features" in /settings.
• 1. Switch to gemini-2.5-pro
• 2. Stop
Note: You can always use /model to select a different option.

Google never disappoints with their half-assed launches.

navanchauhan•2mo ago
I believe this is because you are logged in. You can generate a free API key (with very low limits) through Google AI Studio and use that to test it.

In an ideal world, this workaround would not be needed

¯\_(ツ)_/¯

ionwake•2mo ago
Or, you know, a normal world where Google could get their act together and tie their different business logic together.
lefrenchy•2mo ago
I can't even sign in via the CLI; it's opened the browser window and had me sign in multiple times, and it can't proceed past that.
eptcyka•2mo ago
Why would you `git bisect` when you can `git log -p` and search for “dark”? Why is a marketing listicle on top of HN?
stabbles•2mo ago
You're absolutely right!
NaomiLehman•2mo ago
Half of Hacker News is AI-generated blogs and the other half is marketing material. We have to accept it at this point.
api•2mo ago
That’s better than most other social sites.
dang•2mo ago
Related ongoing threads:

Gemini 3 Pro Preview Live in AI Studio - https://news.ycombinator.com/item?id=45967211 - Nov 2025 (385 comments)

Gemini 3 Pro Model Card - https://news.ycombinator.com/item?id=45963670 - Nov 2025 (263 comments)

thomasm6m6•2mo ago
https://github.com/google-gemini/gemini-cli/blob/release/v0....

> For Google AI Ultra subscribers and paid Gemini and Vertex API key holders, Gemini 3 Pro is already available and ready to enable. For everyone else, we're gradually expanding access through a waitlist.

So, not available yet. I tried with a free API key and did not have access. I do have access on a paid API key, but I'd rather not use that with an agent. The rate limits page the docs link to currently has no info on gemini-3-pro rate limits. Seems to me this is really only for users on the $200/mo subscription plan. Somewhat odd, given that the model is already GA in every other coding agent, as I understand it.