OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
567•klaussilveira•10h ago•160 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
885•xnx•16h ago•538 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
89•matheusalmeida•1d ago•20 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
16•helloplanets•4d ago•8 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
16•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
195•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
197•dmpetrov•11h ago•88 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
305•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
352•aktau•17h ago•173 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
348•ostacke•16h ago•90 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
20•romes•4d ago•2 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
450•todsacerdoti•18h ago•228 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
77•quibono•4d ago•16 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
50•kmm•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
247•eljojo•13h ago•150 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
384•lstoll•17h ago•260 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
10•neogoose•3h ago•6 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
227•i5heu•13h ago•173 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
66•phreda4•10h ago•11 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
112•SerCe•6h ago•90 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
134•vmatsiiako•15h ago•59 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
23•gmays•5h ago•4 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
42•gfortaine•8h ago•12 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
263•surprisetalk•3d ago•35 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
165•limoce•3d ago•87 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1037•cdrnsf•20h ago•429 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
58•rescrv•18h ago•22 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
86•antves•1d ago•63 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
22•denysonique•7h ago•4 comments

Launch HN: Nao Labs (YC X25) – Cursor for Data

158•ClaireGz•9mo ago
Hey HN, we're Claire and Christophe from nao Labs (https://getnao.io/). We just launched nao, an AI code editor for working with data: a local editor, directly connected to your data warehouse, and powered by an AI copilot with built-in context of your data schema and data-specific tools.

See our demo here: https://www.youtube.com/watch?v=QmG6X-5ftZU

Writing code with LLMs is the new normal in software engineering, but not when it comes to manipulating data. Tools like Cursor don't interact natively with data warehouses: they autocomplete SQL blindly, without knowing your data schema. Most of us are still juggling multiple tools: writing code in Cursor, checking results in the warehouse console, troubleshooting with an observability tool, and verifying in the BI tool that no dashboard broke.

When you write code on data with LLMs, you don't care much about the code; you care about the data output. You need a tool that helps you write code relevant to your data, lets you visualize its impact on the output, and quality-checks it for you.

Christophe and I have each spent 10 years in data: Christophe was a data engineer and has built data platforms for dozens of orgs; I was a head of data and helped data teams build their analytics and data products. We've seen how the business asks you to ship data fast, while you're left wondering whether this small line of code will mistakenly multiply the revenue on your CEO's dashboard by 5x. That leaves you two choices: test extensively and ship slow, or don't test and ship fast. That's why we wanted to create nao: a tool really adapted to our data work, one that allows data teams to ship at business pace.

nao is a fork of VS Code with built-in connectors for BigQuery, Snowflake, and Postgres. We built our own AI copilot and tab system and gave them a RAG over your data warehouse schemas and your codebase. We added a set of agent tools to query data, compare data, understand data tools like dbt, and assess the downstream impact of code across your whole data lineage.

The AI tab and the AI agent write code that matches your schema straight away, whether it's SQL, Python, or YAML. nao shows you code diffs and data diffs side by side, so you can visualize what your change did to the data output. And you can leave the data quality checks to the agent: detecting missing or duplicated values and outliers, anticipating breaking changes downstream, or comparing dev and production data.
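To make the data diff and quality checks concrete, here is a minimal sketch of the kind of dev-vs-prod comparison described above. The table names, the specific checks, and the use of plain psycopg2 are illustrative assumptions, not nao's internals:

```python
# Minimal sketch of a dev-vs-prod data diff, the kind of check described
# above. Table names and the psycopg2 connection are assumptions made
# for illustration; this is not nao's actual implementation.
import psycopg2

CHECKS = {
    "row_count":     "SELECT COUNT(*) FROM {table}",
    "null_revenue":  "SELECT COUNT(*) FROM {table} WHERE revenue IS NULL",
    "dup_order_ids": """SELECT COUNT(*) FROM (
                          SELECT order_id FROM {table}
                          GROUP BY order_id HAVING COUNT(*) > 1
                        ) d""",
}

def profile(cur, table: str) -> dict:
    """Run each check query against one table and collect the scalar results."""
    out = {}
    for name, sql in CHECKS.items():
        cur.execute(sql.format(table=table))
        out[name] = cur.fetchone()[0]
    return out

with psycopg2.connect("dbname=warehouse") as conn:
    cur = conn.cursor()
    dev, prod = profile(cur, "dev.orders"), profile(cur, "prod.orders")

# Report any metric that drifted between environments.
for name in CHECKS:
    if dev[name] != prod[name]:
        print(f"{name}: prod={prod[name]} dev={dev[name]}")
```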

Data teams usually use nao to write SQL pipelines, often with dbt. It helps them create data models, document them, and test them, while making sure they don't break data lineage or figures in the BI tool. In run mode, they also use it to run analytics and to identify data quality bugs in production. For less technical profiles, it's also a great help in strengthening code best practices. For large teams, it ensures that the code and metrics remain well factorized and consistent.

Software engineers use nao for the database exploration part: write SQL queries with nao tab, explore data schema with the agent, and write DDL.

A question we often get is: why not just use Cursor with MCPs? Cursor has to trigger many MCP calls to get full context on the data, while nao always has it available in one RAG. MCPs also stay in a very enclosed part of Cursor: they don't bring data context to the tab, and they don't make the UI any better adapted to data workflows. Besides, nao comes pre-packaged for data teams: no extensions to set up, no MCPs to install and authenticate, no CI/CD pipelines to build. That means even non-technical data teams can have a great developer experience.

Our long-term goal is to become the best place to work with data. We want to fine-tune our own models for SQL, Python, and YAML to give the most relevant code suggestions for data, and to broaden our comprehension of data stack tools until nao is the one agnostic editor for any of your data workflows.

You can try it here: https://sunshine.getnao.io/releases/ - download nao, sign up for free, and start using it. Just for the HN launch, you can create a temporary account with a simple username if you'd prefer not to use your email. For now we only have a Mac version, but Linux and Windows are coming.

We'd love to hear your feedback, and to get your thoughts on how we can further improve the data dev experience!

Comments

EliBullockPapa•9mo ago
Cool idea! How did you train your tab model? Fill in the middle or is it based on edit history like cursor? Someone posted this yesterday and I found it fascinating https://www.coplay.dev/blog/a-brief-history-of-cursor-s-tab-...
ClaireGz•9mo ago
Yes, we use fill-in-the-middle models (Mistral and our own trained Qwen), and we feed them your data context: we have our own SQL parser that supplies the right data schema context depending on where your cursor is in the query.
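For readers unfamiliar with fill-in-the-middle completion, here is a rough sketch of how schema context from a SQL parser could be spliced into a FIM prompt. The sentinel tokens follow common open-model conventions; nao's actual prompt layout is not public, so treat this as an assumption:

```python
# Sketch of building a fill-in-the-middle completion prompt that carries
# warehouse schema context. The <fim_*> sentinel tokens follow common
# open-model conventions; the real prompt layout here is an assumption.

def fim_prompt(prefix: str, suffix: str, schema: dict[str, list[str]]) -> str:
    # Serialize only the tables the SQL parser says are in scope.
    schema_ctx = "\n".join(
        f"-- table {t}({', '.join(cols)})" for t, cols in schema.items()
    )
    return f"{schema_ctx}\n<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = fim_prompt(
    prefix="SELECT customer_id, ",
    suffix=" FROM orders GROUP BY customer_id",
    schema={"orders": ["customer_id", "amount", "created_at"]},
)
print(prompt)
```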
paddy_m•9mo ago
I hadn't realized you trained your own model. That's an important differentiator. How do you get training data of schemas in the wild?
jhashemi•9mo ago
add dataform support please for us Google/BigQuery native orgs :-)
ClaireGz•9mo ago
Yes, it's on our roadmap; some users have already asked for it!
jhashemi•9mo ago
Also, how does it do with transitive joins across multiple tables that may not have FK/PK relationships? Other key features that would put this over the top: usage analysis, and query rewriting for inefficient existing queries.
ClaireGz•9mo ago
For the joins, we give the model the right context so it can infer the relationships: the schema of each table, plus how joins are already done in your repository and query history. Usage analysis is definitely on the roadmap: we want to access the data warehouse logs to measure usage of each table.
pomarie•9mo ago
Great idea! How does your tab model compare to the ones from Cursor, Windsurf, etc.?
ClaireGz•9mo ago
On SQL writing we are more relevant. On speed, it's hard to benchmark exactly against Cursor and Windsurf, but we are a bit slower (around 600ms on average), and we know what we have to improve to speed it up.

Next on the list is a next-edit suggestion dedicated to data work, especially with dbt (or SQL transformations), where changing a query means the downstream queries have to change too.
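For context, the downstream queries that need to change can be found mechanically from dbt's dependency graph. Here is a hypothetical sketch that walks the child_map in dbt's standard target/manifest.json; whether nao does it this way is an assumption:

```python
# Sketch: find every dbt model downstream of a changed one by walking
# the child_map in target/manifest.json (standard dbt artifact layout).
# Treating this as nao's actual mechanism is an assumption.
import json

def downstream(manifest_path: str, changed: str) -> set[str]:
    with open(manifest_path) as f:
        child_map = json.load(f)["child_map"]
    seen, stack = set(), [changed]
    while stack:
        for child in child_map.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(downstream("target/manifest.json", "model.my_project.stg_orders"))
```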

nathan_douglas•9mo ago
This is really slick. I watched the YouTube video (a couple of times; I didn't grok what was happening immediately) and I really love how this accelerates feedback cycles. Very, very cool.
ClaireGz•9mo ago
Thank you! That's exactly what we target: we've seen that data teams often have a longer feedback loop than software engineers. Our goal is to shorten it and bring data as close as possible to your dev flow.
badotnet•9mo ago
I've met one of the founders, Christophe. Smart, perfect vision and huge energy. I can say that I have no doubt they'll succeed with Nao! Congrats!
blef•9mo ago
*blushes*
ClaireGz•9mo ago
Thanks for the kind comments, he's surely a great guy :)
christoribeiro•9mo ago
That’s exactly what I was looking for months ago. I will check out Nao for sure.
ClaireGz•9mo ago
Great! Let us know if it's how you imagined it when you try it
mrfumier•9mo ago
Awesome product!
ClaireGz•9mo ago
Thank you!
dennisy•9mo ago
Does this mean we will have people “vibe coding” data warehouses now? Might cause a few issues…
ClaireGz•9mo ago
We say "data vibing" so it feels unique to the data community! But in all seriousness, this is already an issue, people are already asking ChatGPT (or Cursor/whatever else) to generate SQL for them, but the next steps do not exist, if you "vibe code" for data you want to have the easiest feedback loop you can get to check if the output is good, and that's what we are working on: identifying the downstream impacts in the IDE and proposing fixes, a table diff view, new UI/UX to test your outputs.

The goal for us is to be the best way to do data with AI.

dennisy•9mo ago
Ok, but how do you know it’s good?

With data I think that is very hard. I wrote a SQL query (without AI) which ran and showed what looked like correct numbers, only to realise years later that it was incorrect.

When doing more complex calculations, I am not clear how to check if the output is correct.

ClaireGz•9mo ago
Usually what we've seen is data people keeping notebooks/worksheets on the side with a bunch of manual SQL queries that they run to validate data consistency. The process is highly manual and time-consuming. Most of the time, teams know what kind of checks they want to run on the data to validate it; our goal here is to provide them the best toolbox to do it, in the IDE.

Though I'd say this is like writing tests in software: you can't catch everything the first time (even at 100% code coverage), especially in data, where most of the time things break because of upstream producers.

Live observability tools monitoring production data will still be needed in the near future.
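For readers who haven't seen that workflow, the manual side-worksheet checks described above are often simple reconciliation queries. A toy example using DuckDB, with invented table names:

```python
# One of the ad-hoc reconciliation queries data folks keep in a side
# worksheet: does the mart's revenue still reconcile with staging?
# Runs anywhere with DuckDB; table names are invented for illustration.
import duckdb

con = duckdb.connect()  # in-memory; point at your warehouse in practice
con.execute("CREATE TABLE stg_orders AS SELECT 1 AS id, 10.0 AS amount UNION ALL SELECT 2, 15.0")
con.execute("CREATE TABLE mart_revenue AS SELECT 25.0 AS total_revenue")

drift = con.execute("""
    SELECT (SELECT SUM(amount) FROM stg_orders)
         - (SELECT total_revenue FROM mart_revenue) AS drift
""").fetchone()[0]

assert drift == 0, f"mart drifted from staging by {drift}"
print("revenue reconciles")
```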

redox_•9mo ago
Well done!
ClaireGz•9mo ago
Thank you!
coatue•9mo ago
Would this work with Hydra? https://news.ycombinator.com/item?id=43937852
ClaireGz•9mo ago
We support Postgres (and DuckDB is coming very soon), so yes, probably, as Hydra is a mix of both, but I'd have to try it
coatue•9mo ago
Sweeeet. Let's give it a go!
redwood_•9mo ago
Like the looks of this. Any chance you'll be adding support for SQLite at some point?
ClaireGz•9mo ago
Oh yes! That should be fairly easy: we have DuckDB coming in the next release, and we can also add SQLite. You use SQLite to develop locally, I guess?
redwood_•9mo ago
Yes, just local, but I'd love to use Nao to quickly analyze datasets
bomewish•9mo ago
Will also give nao a shot as soon as this ships. A LOT of non-corp data work happens in SQLite and DuckDB.
davidwritesbugs•9mo ago
Yes, I second this. I use SQLite locally and for prototyping data designs, so SQLite support would be very useful indeed; not a deal breaker, but definitely a tick item.
linsomniac•9mo ago
I can't really tell which databases are coming soon; a hover tooltip over the icons would be nice. Is SQL Server coming anytime soon? My coworkers are working on some data integrity work right now and it might be a nice tool for them.
ClaireGz•9mo ago
It's Databricks, Iceberg, and Redshift, which were the most requested in the first survey we did. But judging by this post and a broader audience, it appears SQLite wins! We'll also add SQL Server to the list.
jinjin2•9mo ago
Do you support Exasol? In the current climate we don't want to be too dependent on US cloud services, so we are moving our performance-sensitive DWH workloads off Snowflake to Exasol.
ClaireGz•9mo ago
Not yet, but we are willing to develop these specific connectors on request. You can reach out!

Just one question: what made you pick Exasol rather than going with an open-source warehouse tech (e.g. ClickHouse or a lake with Trino)?

jinjin2•9mo ago
We tested those, but none of them could reach the performance we needed, especially under high concurrent load (we have a large number of concurrent workloads). Exasol is just crazy fast.
yevpats•9mo ago
Nice! I think this space is growing. There are a few others I'm aware of in the space worth checking out: https://julius.ai/, https://cipher42.ai (I've built the early version of this).
ClaireGz•9mo ago
We've heard of Julius a lot, but didn't know about Cipher42; there are a few other folks around. We feel there is a real pain here, and data teams are a bit abandoned at the moment when it comes to working with AI, so it makes sense. Curious to hear about your journey building Cipher42: did you stop working on it?
bilalq•9mo ago
Does this only work if I'm writing raw SQL? Can I use this today if my project uses Postgres but has queries written in TypeScript using a query builder like Kysely?
ClaireGz•9mo ago
Yeah, at the moment the tab is made to work best with raw SQL (either pure SQL files or SQL in a string).

But if you use the chat/agent, you can explain that you're using Kysely and give it the warehouse context; it will probably handle this.

I didn't know Kysely, but from the gif on the project's landing page it looks like the autocomplete is great? It's different from a tab, I agree, though.

jakozaur•9mo ago
The founders showcased a demo at the Data Council conference. Looked cool!
ClaireGz•9mo ago
Glad you liked it!
tucared•9mo ago
Been using this for several weeks now and it's genuinely improved my workflow: I'm choosing it over VS Code and extensions more than half the time.

The chat for exploratory data analysis ("what can you tell me about this column I just added?"), the worksheets and column lineage are real game-changers for dbt development. These features feel purposefully designed for how I actually work.

Claire and Christophe are super responsive to feedback, implementing features and fixes quickly. You can see the product evolving in all the right directions!

ClaireGz•9mo ago
Thanks for your kind message — and for helping us make nao better!
jnnnthnn•9mo ago
This looks awesome! I wish I could connect to my Postgres DB using SSL.
ClaireGz•9mo ago
Thanks for suggesting, we should set this up!
ClaireGz•8mo ago
As an update, we now have SSL connections available
TheTaytay•9mo ago
How much data is hitting your models/prompts? I am okay with you knowing about my schema, but a lot of warehouse data is sensitive data. I saw you have enterprise plans, and maybe that is my answer, but I’d love to know ahead of time if data/results are hitting your servers in addition to the code, or if it’s code-only.
ClaireGz•9mo ago
The content of the data never goes to the models unless you specifically grant access. Our server only stores embeddings of your codebase and data schema. The content of the data is only accessed locally, on your computer. When you ask our agent to run a query for you, it executes it on your warehouse, then asks you for permission to read the result. If you don't allow it, you can still preview the result locally without sending it to the LLM. The enterprise version is for you if you want to be sure the prompts and context you send to nao don't go through a public LLM endpoint and don't get trained on.
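The flow described above (run the query on the warehouse, gate the result before it reaches the model) can be sketched roughly as follows; run_on_warehouse and send_to_llm are hypothetical stand-ins, not nao's API:

```python
# Sketch of the permission gate described above: the query executes on
# the warehouse either way, but its result only reaches the LLM if the
# user approves. run_on_warehouse/send_to_llm are hypothetical stand-ins.

def run_query_tool(sql: str, run_on_warehouse, send_to_llm) -> str:
    rows = run_on_warehouse(sql)          # executes remotely, lands locally
    preview = "\n".join(map(str, rows[:20]))
    answer = input(f"Share this result with the LLM?\n{preview}\n[y/N] ")
    if answer.lower() == "y":
        return send_to_llm(rows)          # result enters the model context
    print(preview)                        # local preview only; nothing sent
    return "(result withheld by user)"

if __name__ == "__main__":
    fake_rows = [("2026-01-01", 42.0), ("2026-01-02", 57.5)]
    print(run_query_tool(
        "SELECT day, revenue FROM daily_revenue",
        run_on_warehouse=lambda sql: fake_rows,
        send_to_llm=lambda rows: f"(sent {len(rows)} rows to the model)",
    ))
```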
paddy_m•9mo ago
This looks like many LLM-assisted data projects: they help and are flexible, but they aren't repeatable and aren't fast enough to be interactive. Nao is a good execution of the concept.

I built Buckaroo, a data table UI for Jupyter and Pandas/Polars, that first lets you look at the data in a modern, performant table with histograms, formatting, and summary stats.

Yesterday I released autocleaning for Buckaroo. It looks at data and heuristically chooses cleaning methods with definite code. This is fast (less than 500ms). Multiple cleaning strategies can be cycled through, and you can choose the best approach for your data. For the simple problems we shouldn't need to consult an LLM to do the obvious things.

All of this is open source and extensible.

[1] https://youtube.com/shorts/4Jz-Wgf3YDc

[2] https://github.com/paddymul/buckaroo

[3] https://marimo.io/p/@paddy-mullen/buckaroo-auto-cleaning Live WASM notebook that you can play with - no downloads or installs required
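For context, heuristic autocleaning of the kind paddy_m describes can be as simple as cheap per-column rules. A rough sketch with pandas; the rules and thresholds are invented, and this is not Buckaroo's actual code:

```python
# Rough sketch of heuristic, LLM-free autocleaning in the spirit of
# Buckaroo's feature: inspect each column, apply an obvious fix when a
# cheap rule fires. Rules and thresholds are invented, not Buckaroo's code.
import pandas as pd

def autoclean(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for col in out.columns:
        s = out[col]
        if s.dtype == object:
            # Mostly-numeric strings -> numbers; unparseable cells become NaN.
            as_num = pd.to_numeric(s, errors="coerce")
            if as_num.notna().mean() > 0.5:
                out[col] = as_num
                continue
            # Strip stray whitespace on remaining string columns.
            out[col] = s.str.strip()
    return out

dirty = pd.DataFrame({"price": ["1.5", " 2.0", "oops"], "name": [" a ", "b", "c "]})
print(autoclean(dirty).dtypes)
```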

ClaireGz•9mo ago
Thanks for sharing. I like the view you built to visualize the profiling of your data; I think that's indeed key to understanding your data.
accurrent•9mo ago
Welp! I've been building something really similar. Mine's nowhere near as complete as Buckaroo. I really think embedded apps in notebooks can be very useful.
pdepip•9mo ago
Congrats on the HN launch! Really excited to give this a try; I think this could be a huge unlock for my team.

One quick issue - unable to connect to my postgres instance that requires SSL.

SSH tunneling seems to be broken as well: when the box is checked, I am unable to select a private key path and the connect button is gone.

Parsing DB URI would be a helpful feature as well!

Thanks so much, excited to get this up and running when everything is fixed!

pdepip•9mo ago
Ignore the SSH tunneling bit; I didn't see that I had to scroll... it's been a long week. Regardless, SSL-enabled connections would be huge.
ClaireGz•9mo ago
Thanks for your feedback, we'll add SSL in the connection soon
ClaireGz•8mo ago
As an update, we now have SSL connections available
dmonay•9mo ago
Any plans to add support for ClickHouse? If so, what does that timeline look like?
ClaireGz•9mo ago
Probably in a few months. For now we're focused on making the experience great for a restricted set of warehouses. But you can reach out by email and we'll keep you updated.
sgt101•9mo ago
Does anyone have any links for more LLM based tools that are aimed at data engineering and data science?
blef•9mo ago
I'm working on a repo listing these; I hope to finish it soon
sgt101•8mo ago
Any link?
mousetree•9mo ago
Hi, this is a great idea. I'm trying to get it working, but I'm having some trouble at the last step, "Configure dbt project". I'm looking for a support link on your website but can't find one.
ClaireGz•9mo ago
Here is our contact page; feel free to contact us and we'll help you set up: https://docs.getnao.io/docs/support/support Also, we'll be adding a new dbt onboarding flow tomorrow!