frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
250•theblazehen•2d ago•84 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
23•AlexeyBrin•1h ago•1 comment

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
705•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
967•xnx•21h ago•558 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
66•jesperordrup•6h ago•28 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•43m ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
135•matheusalmeida•2d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
42•speckx•4d ago•34 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
68•videotopia•4d ago•6 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
13•matt_d•3d ago•2 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
39•kaonwarb•3d ago•30 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
45•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
237•isitcontent•16h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
237•dmpetrov•16h ago•126 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
340•vecti•18h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
506•todsacerdoti•23h ago•247 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
389•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
303•eljojo•18h ago•188 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•186 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
3•andmarios•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
428•lstoll•22h ago•284 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
71•kmm•5d ago•10 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
23•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
25•1vuio0pswjnm7•2h ago•14 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
270•i5heu•18h ago•219 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
34•romes•4d ago•3 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1079•cdrnsf•1d ago•461 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•30 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
305•surprisetalk•3d ago•44 comments

Nullable but not null

https://efe.me/posts/nullable-but-not-null/
65•efeoge•6mo ago

Comments

lblume•6mo ago
> But a field that is nullable in the schema and never null in practice is a silent lie.

This seems to be the central claim. But just as lies require intent, so does database design to some degree.

A column that is nullable but never null does not conclusively say anything, really. That's like saying a birthday column that is never before 1970 in the current data should be restricted to years after that date.

A nullable column signals that the data may be left empty, which is entirely different from that column actually being empty in the current data. It is an is-ought distinction: the potentiality of null ("nullability") is not to be confused with the actuality of current vacancy ("null values"). As a principle of robust, fault-tolerant code, the programmer extracting data from the database must check for null whenever the data is nullable, not just when it happens to be null now.

matsemann•6mo ago
The problem is you end up in other places with the assumption that it's never null. So in the future, when you actually do set it to null somewhere, it will blow up.
tuyiown•6mo ago
People who assume that a (nullable) value is never null because the sample at hand contains no null values _will_ learn their lesson the hard way, the one you describe.

But even non-nullable does not always withstand time. I'd argue that use cases where the field _has_ to be null eventually emerge and somehow have to be mitigated. There is no easy way to safely work around that without either tons of work that duplicates lots of things, or taking risks by adapting the code base.

darkwater•6mo ago
I think their point is that for all intents and purposes, the column IS not nullable. It's nullable only as an artifact of making live schema migration easier, with no blocking/downtime. But as far as the data model is concerned, it should not be nullable.
lblume•6mo ago
Sure, if one just leaves a column nullable due to negligence, one should check the actual requirements and enforce them to make invalid states unrepresentable. The author still makes too strong a claim, one that becomes detached from the migration aspect by insinuating that one can simply test the database empirically to check whether this is the case, with which I disagree.
jerf•6mo ago
I don't think the author is talking generally about fields that could be NULL but just happen to never be so in the production DB. The piece is specifically in the context of a new database field that is fully intended by its designer to be NOT NULL, which was NULL only for migration purposes, and which was never updated to be NOT NULL once the migration is complete. The point was not meant to be extended beyond that.

One could write a separate piece about maybe using that as a clue that the field could be NOT NULL'd in the future but that's not what this post is.

lblume•6mo ago
In that case we read it entirely differently — the last paragraph explicitly mentions inferring that "Nullable fields that never hold nulls are often the result of incomplete migrations or forgotten cleanup steps". The author also proposes a script "to identify cases where the field could safely be changed to non-nullable". But alas, we cannot do that in full generality, because there is a big difference in intent that cannot be captured purely by counting nulls, and surely not by only calculating percentages.
dataflow•6mo ago
> That's like saying a birthday column that is never before 1970 in the current data should be restricted to years after that date.

No it's not, because they specifically started with the premise that the field was initially intentionally non-null and was only temporarily made nullable for migration purposes. That is obviously not the situation you are describing, right?

codingdave•6mo ago
How would the database know whether the other app layers depend on that value or not? You could absolutely have an app that does not require data in a specific field to function, yet all records happen to have data. This is actually fairly common in single-tenant apps, where some tenants populate a field and others do not. You need to look at how the data is used across the entire stack to know whether or not it should be nullable, not whatever the current data happens to be.
cerved•6mo ago
It doesn't. That's why it's the responsibility of the application layer to correctly implement the data model of the database, and not the other way around.
Lvl999Noob•6mo ago
The script example in TFA is just a starting point. I believe you would still manually go through all the columns it finds and decide which ones are actually supposed to be nullable and which ones don't need to be. As the article said, nullable fields that don't contain null could be a sign of incomplete migrations and such.
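
Something along these lines, as a rough Postgres sketch (not the article's actual script; orders/shipped_at is a made-up example):

    -- Candidate columns: declared nullable in the schema
    SELECT table_name, column_name
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND is_nullable = 'YES';

    -- Then, for each candidate, check whether any NULLs actually exist,
    -- e.g. for a hypothetical orders.shipped_at:
    SELECT count(*) AS null_rows
    FROM orders
    WHERE shipped_at IS NULL;
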
danbruc•6mo ago
A column that is nullable but never null might indicate that it should be non-nullable, but does not necessarily imply so. Say you have an optional comment field; it might just happen by accident that a comment was recorded for each row, though that of course becomes increasingly unlikely with each additional row in the table. There is probably no harm in checking your database for such columns, especially in tables with many rows, but in the end it is up to the semantics of the data model whether a column should be nullable or not. The absence of NULLs is an indicator, not a determiner.
Szpadel•6mo ago
There is also another scenario: a field might obviously look like it should have a value, and on inspection it might even always have one, but it might be a "lazy" value.

E.g. you might have some big CSV uploaded and a row count for it; your app could insert the record without this number and an async process would fill it in later.

There might even be some corner case where a null value is possible.

I believe the solution here isn't to check which fields never use null, but to improve the process of creating such migrations: either create a second ticket for the next release to update the db, or commit the new migration to some kind of next branch.

marcosdumay•6mo ago
> There is probably no harm in checking your database for such columns

The harm is the same as with any other unreliable linter rule.

Each such rule is almost harmless on its own. And in most places that use that kind of rule, collectively they are extremely harmful.

danbruc•6mo ago
With that I agree: once you start to treat the output of any linter as the truth, you are actively lowering the quality of your codebase. At least if it has reasonably good quality; if it is in bad shape, blindly following linters might still increase code quality, you will just never reach the peak.
mdavid626•6mo ago
I completely agree. Unfortunately, this is one of those things that’s hard to convince people of. You often hear: ‘It doesn’t matter,’ ‘It works fine without it,’ or ‘Why overcomplicate things?’—but that mindset can be limiting.
deepsun•6mo ago
I've seen worse. Some teams use JSON for their data. Not only can each field be missing (aka NULL), it can also be "null". Or a different type.

I envy your team, whose only mistake is to forget setting NULLABLE. Rainbows and unicorns ;)

thomas-st•6mo ago
That's just the JSON equivalent of "we have data, and it's null" vs "data is missing", and consistency could be enforced using a constraint or by making it non-NULL in the first place.
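
As a rough Postgres sketch of that enforcement (the events table and payload column are made up):

    -- In jsonb, a missing key and an explicit JSON null are different things:
    SELECT '{}'::jsonb ? 'age'            AS key_present,    -- false: key absent
           '{"age": null}'::jsonb ? 'age' AS key_present_2;  -- true: value is JSON null

    -- Forbid explicit JSON nulls, so "missing" is the only way to say "no data":
    ALTER TABLE events
      ADD CONSTRAINT payload_age_not_json_null
      CHECK (payload -> 'age' IS DISTINCT FROM 'null'::jsonb);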

It's more common with string fields, which in many cases just get rendered on a web form that doesn't differentiate blank (empty string) from the null state; therefore, in the database we should in most cases set it up as follows:

- Value required: field should be non-NULL + at least length 1 (via check constraint)

- Value optional: either field is non-NULL, or field is nullable + at least length 1
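
For concreteness, a rough Postgres version of those two setups (the profiles table and its columns are made up):

    -- Required: NOT NULL plus a check that rules out the empty string
    ALTER TABLE profiles ALTER COLUMN display_name SET NOT NULL;
    ALTER TABLE profiles
      ADD CONSTRAINT display_name_not_blank CHECK (length(display_name) >= 1);

    -- Optional: nullable, but the empty string is never allowed
    ALTER TABLE profiles
      ADD CONSTRAINT bio_not_blank CHECK (bio IS NULL OR length(bio) >= 1);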

I'm curious if you prefer to store optional strings as non-NULL and rely on the length, or as nullable and have a length constraint.

lock1•6mo ago
What if almost everything is NULLABLE, including the supposed primary key and foreign keys of the table?

I've had firsthand experience building a consumer for that kind of DB, and it's hell to get anything running correctly without literally writing a dedicated layer to sanity-check everything.

phplovesong•6mo ago
Sounds like a bad domain, and/or poor design.
gopher_space•6mo ago
> without literally writing a dedicated layer to sanity check everything

It's not attractive to developers for a variety of reasons, but encapsulation is generally cheaper (in every sense) than the alternatives by orders of magnitude. If a system is hard to alter and other people rely on it to do their jobs, thinking about it as set in stone during my planning process will save me grief and heartache.

Starting with an independent client in mind makes it easier to plan around things like reconstructing tables from composed exports or dealing with authoritatively incorrect data. It's a good collection point for hard-coded edge case handling, and a great location for all of your discovery notes.

whatevaa•6mo ago
You end up with tons of useless error handling code which can only "halt and catch fire" because there is no recovery from a lot of those cases.
antif•6mo ago
No shame on JSON... XML can do this too!
toledocavani•6mo ago
I worked on an enterprise project where they used a BPMN engine for all business logic (designed by non-technical people).

Each process could either have predefined fields (which take time to implement) or a key-string value store (yes, they chose this).

Those values got edited by the BPMN logic itself, by some JS in BPMN nodes (some, as not all of those people can code), or by Java hooks (a lot of them, written at those other people's request).

So when something went wrong, we swam in a sea of key-values, searching all the code for whatever updates a given field, and for why its value is neither JSON nor null nor boolean but the string "Not used, waiting for policy 1234".

After that project, I'm happy every time I can use a DB to store data, more so if it can run locally, and much more so if it's relational.

Trigg3r•6mo ago
Am I missing something here in my (MS) SQL world? If a new field is added as null, I do that to the (now) 20-year-old system so we don't break hundreds of stored procs - any (new) code that needs that field has to check for it being null...
minkeymaniac•6mo ago
In SQL Server, you can simply add a NOT NULL check constraint WITH NOCHECK (see my comment below with the full code):

ALTER TABLE foo WITH NOCHECK ADD CONSTRAINT CheckNotnull CHECK (id IS NOT NULL)

Any new values coming in cannot be null, but the values already in the table with null are fine... then you can update them to not null over time.

stux•6mo ago
This is interesting! A field being nullable because it's legitimately optional in the domain model is one thing, but for new fields that shouldn't be nullable in the domain model, unless you can pick a reasonable identity value, you need a concept of absence that's different from null. Luckily the intersection of "non-nullable fields", "fields with no reasonable identity value", and "fields which didn't exist in v1 of the domain model" is normally pretty small, but it's painful when it happens.

This reminds me of frozen/nonfrozen enums in Swift. You can do exhaustive case analysis on frozen enums, but case analysis on nonfrozen enums requires adding an `@unknown default` case.

https://docs.swift.org/swift-book/documentation/the-swift-pr...

nlitened•6mo ago
> you need a concept of absence that's different from null

Could you give an example? Null _is_ absence, the way I see it.

Lvl999Noob•6mo ago
What they are saying is that the field is always present in the domain model but we don't have the information to backfill it. For example, say you have a customers table. Originally, it just stored their name and internal ID. But now you are adding in their government ID as well. Except that you already have thousands of customers and you don't have their government ID. So you either make the column nullable and slowly backfill it over time. Or you find some default value which isn't null but the code understands it to still be empty. And again, you slowly backfill over time.
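
In SQL terms, the first option is roughly this (a sketch; the customers/government_id names and the staged_government_ids source are made up):

    -- 1) Add the new column as nullable; cheap, no table rewrite
    ALTER TABLE customers ADD COLUMN government_id text;

    -- 2) Backfill over time as the data is collected
    UPDATE customers c
    SET government_id = s.government_id
    FROM staged_government_ids s
    WHERE c.id = s.customer_id
      AND c.government_id IS NULL;

    -- 3) Once no NULLs remain, record the intent in the schema
    ALTER TABLE customers ALTER COLUMN government_id SET NOT NULL;
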
nlitened•6mo ago
> So you either make the column nullable and slowly backfill it over time. Or you find some default value which isn't null but the code understands it to still be empty. And again, you slowly backfill over time.

What would be the justification for using some other “default value” in this case? That's just null with extra steps. Null _is_ the default value.

There's nothing gross or unholy about null values. Very smart people many years ago envisioned them as part of relational databases specifically for use cases like these.

mort96•6mo ago
I want to represent that a field should never get new null values: it should be treated as non-nullable for the purpose of writing. However, there are historical records which were made before the introduction of the field, so it should be treated as nullable for the purpose of reading.
nlitened•6mo ago
> I want to represent that a field should never get new null values, it should be treated as non-nullable for the purpose of writing

That's a job for DB triggers — arbitrary restrictions that are automatically applied when you create or update records. You can restrict new null values, you can restrict numeric values to be odd or prime, you can make sure that only one of two fields can be null but never both at the same time — all the things that you can't even describe in a type system.
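
For example, a rough Postgres sketch of the first one (the customers table and government_id column are made up) that rejects new NULL writes while leaving historical NULLs readable:

    -- Gate: fires on INSERTs, and on UPDATEs that touch government_id
    CREATE FUNCTION reject_null_government_id() RETURNS trigger AS $$
    BEGIN
      RAISE EXCEPTION 'government_id may not be written as NULL';
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER government_id_write_check
      BEFORE INSERT OR UPDATE OF government_id ON customers
      FOR EACH ROW
      WHEN (NEW.government_id IS NULL)
      EXECUTE FUNCTION reject_null_government_id();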

whatevaa•6mo ago
If null is an actual value, you need another value to represent "undefined".

Null is also considered a billion dollar mistake by very smart people.

bazoom42•6mo ago
If you are referring to Tony Hoare's talk, it is not nulls that are the billion-dollar mistake. The mistake is type systems that do not distinguish between nullable and non-nullable.
PeterZaitsev•6mo ago
Shouldn't we rather look for the database to be able to do online, efficient, non-locking addition of a column with any default value, not just NULL, instead of the application carrying complicated and fragile logic?
avg_dev•6mo ago
I believe PostgreSQL has done this since v11, which was released in 2018 (current is v17):

> Many other useful performance improvements, including the ability to avoid a table rewrite for ALTER TABLE ... ADD COLUMN with a non-null column default

https://www.postgresql.org/docs/release/11.0/

I think there is some restriction there, like the default can't be "volatile" - I can't remember the precise definition here but I think current_timestamp would be volatile, but any static value would not.

tudorg•6mo ago
That is correct: for non-volatile default values Postgres is quick, which means it is generally a safe operation.

Also interesting, `now()` is non-volatile because it's defined as "start of the transaction". So if you add a column with `DEFAULT now()` all rows will get the same value. But `timeofday()` is not volatile, so `DEFAULT timeofday()` is going to lock the table for a long time. A bit of a subtle gotcha.

avg_dev•6mo ago
Thanks for the info. One minor point:

> But `timeofday()` is not volatile, so `DEFAULT timeofday()` is going to lock the table for a long time.

Perhaps the “not” was a typo?

tudorg•6mo ago
Ah yes, it was a typo, sorry.
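
To make the gotcha concrete, a rough sketch (the customers table is made up; behavior as of Postgres 11+):

    -- now() is stable within a transaction: no table rewrite, fast
    ALTER TABLE customers ADD COLUMN created_at timestamptz DEFAULT now();

    -- timeofday() is volatile (a fresh value per call), so Postgres must
    -- rewrite the whole table to evaluate it per row, holding a long lock
    ALTER TABLE customers ADD COLUMN touched_at text DEFAULT timeofday();
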
HocusLocus•6mo ago
I'm glad to see people discussing [zero,infinitesimals,false,emptystring,isnull,unpopulated,missing] as if each one is a thing. They've always been things! We've just been buried in compromises and shortcuts all these years.

There should also be a [the-asteroid-has-hit-y'all-are-so-stupid] and global data systems should just pass that around after impact until the power goes out for good.

comrade1234•6mo ago
I've never seen step 5 happen...
tudorg•6mo ago
For a tooling solution to this problem, and many others, pgroll (https://github.com/xataio/pgroll) automates the steps from the blog post in a single higher-level operation. It can do things like add a hidden column, backfill it with data, then add the constraint, and only then expose it in the new schema.
jonny_eh•6mo ago
I recommend using checklists for schema changes like this.
minkeymaniac•6mo ago
If you don't care about old data having null, you could add a check constraint with NOCHECK (this is SQL Server, fwiw).

For example:

    create table foo(id int)
    insert foo values (1), (2), (3)
    insert foo values (null)

    select * from foo

    id
    ----
    1
    2
    3
    NULL

    ALTER TABLE foo with nocheck ADD CONSTRAINT CheckNotnull check (id IS NOT NULL)

    insert foo values (null)

    Msg 547, Level 16, State 0, Line 13
    The INSERT statement conflicted with the CHECK constraint "CheckNotnull".
    The conflict occurred in database "tempdb", table "dbo.foo", column 'id'.
    The statement has been terminated.

However, be aware that if you update an existing value to NULL, you will still get the error:

    update foo set id = null where id = 2

    Msg 547, Level 16, State 0, Line 20
    The UPDATE statement conflicted with the CHECK constraint "CheckNotnull".
    The conflict occurred in database "tempdb", table "dbo.foo", column 'id'.

nlitened•6mo ago
From my experience, any new field that gets added to an existing table (after a non-trivial system has been deployed to a production environment with actual clients) must always be nullable.

Firstly, it's more practical and brings fewer surprises to all the people involved (especially if there are many people involved).

Secondly, if we're being pedantic, it is a mere acknowledgement of the fact that the field was not there before and things worked without it; now the field is here, so in every sense the field _is actually_ optional, and all the code touching it should know how to handle that.