
14 Killed in protests in Nepal over social media ban

https://www.tribuneindia.com/news/world/massive-protests-in-nepal-over-social-media-ban/
221•whatsupdog•2h ago•123 comments

ICEBlock handled my vulnerability report in the worst possible way

https://micahflee.com/iceblock-handled-my-vulnerability-report-in-the-worst-possible-way/
87•FergusArgyll•1h ago•38 comments

RSS Beat Microsoft

https://buttondown.com/blog/rss-vs-ice
73•vidyesh•2h ago•39 comments

Package Managers Are Evil

https://www.gingerbill.org/article/2025/09/08/package-managers-are-evil/
34•gingerBill•1h ago•35 comments

Indiana Jones and the Last Crusade Adventure Prototype Recovered for the C64

https://www.gamesthatwerent.com/2025/09/indiana-jones-and-the-last-crusade-adventure-prototype-re...
22•ibobev•1h ago•1 comments

Using Claude Code to modernize a 25-year-old kernel driver

https://dmitrybrant.com/2025/09/07/using-claude-code-to-modernize-a-25-year-old-kernel-driver
696•dmitrybrant•13h ago•225 comments

VMware's in court again. Customer relationships rarely go this wrong

https://www.theregister.com/2025/09/08/vmware_in_court_opinion/
81•rntn•1h ago•25 comments

The MacBook has a sensor that knows the exact angle of the screen hinge

https://twitter.com/samhenrigold/status/1964428927159382261
871•leephillips•22h ago•423 comments

Why Is Japan Still Investing in Custom Floating Point Accelerators?

https://www.nextplatform.com/2025/09/04/why-is-japan-still-investing-in-custom-floating-point-acc...
130•rbanffy•2d ago•33 comments

Formatting code should be unnecessary

https://maxleiter.com/blog/formatting
240•MaxLeiter•14h ago•325 comments

GPT-5 Thinking in ChatGPT (a.k.a. Research Goblin) is good at search

https://simonwillison.net/2025/Sep/6/research-goblin/
286•simonw•1d ago•222 comments

How inaccurate are Nintendo's official emulators? [video]

https://www.youtube.com/watch?v=oYjYmSniQyM
60•viraptor•2h ago•11 comments

Intel Arc Pro B50 GPU Launched at $349 for Compact Workstations

https://www.guru3d.com/story/intel-arc-pro-b50-gpu-launched-at-for-compact-workstations/
154•qwytw•15h ago•177 comments

Meta suppressed research on child safety, employees say

https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/
11•mdhb•43m ago•0 comments

Look Out for Bugs

https://matklad.github.io/2025/09/04/look-for-bugs.html
31•todsacerdoti•3d ago•19 comments

Creative Technology: The Sound Blaster

https://www.abortretry.fail/p/the-story-of-creative-technology
121•BirAdam•15h ago•73 comments

How many SPARCs is too many SPARCs?

https://thejpster.org.uk/blog/blog-2025-08-20/
36•naves•2d ago•11 comments

Immich – High performance self-hosted photo and video management solution

https://github.com/immich-app/immich
24•rzk•5h ago•5 comments

Writing by manipulating visual representations of stories

https://github.com/m-damien/VisualStoryWriting
5•walterbell•3d ago•3 comments

Analog optical computer for AI inference and combinatorial optimization

https://www.nature.com/articles/s41586-025-09430-z
84•officerk•3d ago•15 comments

How many dimensions is this?

https://lcamtuf.substack.com/p/how-many-dimensions-is-this
92•robin_reala•4d ago•22 comments

No more data centers: Ohio township pushes back against influx of Amazon, others

https://www.usatoday.com
11•ericmay•40m ago•4 comments

Show HN: Veena Chromatic Tuner

https://play.google.com/store/apps/details?id=in.magima.digitaltuner&hl=en_US
41•v15w•7h ago•23 comments

I am giving up on Intel and have bought an AMD Ryzen 9950X3D

https://michael.stapelberg.ch/posts/2025-09-07-bye-intel-hi-amd-9950x3d/
282•secure•1d ago•292 comments

Forty-Four Esolangs: The Art of Esoteric Code

https://spectrum.ieee.org/esoteric-programming-languages-daniel-temkin
62•eso_eso•3d ago•35 comments

Taking Buildkite from a side project to a global company

https://www.valleyofdoubt.com/p/taking-buildkite-from-a-side-project
74•shandsaker_au•15h ago•9 comments

Garmin beats Apple to market with satellite-connected smartwatch

https://www.macrumors.com/2025/09/03/garmin-satellite-smartwatch/
210•mgh2•4d ago•194 comments

How to make metals from Martian dirt

https://www.csiro.au/en/news/All/Articles/2025/August/Metals-out-of-martian-dirt
73•PaulHoule•18h ago•81 comments

No Silver Bullet: Essence and Accidents of Software Engineering (1986) [pdf]

https://www.cs.unc.edu/techreports/86-020.pdf
101•benterix•17h ago•24 comments

What is the origin of the private network address 192.168.*.*? (2009)

https://lists.ding.net/othersite/isoc-internet-history/2009/oct/msg00000.html
212•kreyenborgi•1d ago•83 comments

SQLite's File Format

https://www.sqlite.org/fileformat.html
192•whatisabcdefgh•3d ago

Comments

adzm•1d ago
I certainly do appreciate that the file format internals are so well documented here. It really reveals a lot of information about the inner workings of sqlite itself. I highly recommend reading it; I actually saved a copy for a rainy day sometime and it was very insightful and absolutely influenced some design decisions using sqlite in the future.
chasil•22h ago
The format itself is a U.S. federal standard, and cannot be changed. That has advantages and drawbacks.

https://www.sqlite.org/locrsf.html

justin66•21h ago
I assume the SQLite team could increment the version to 4 if they really needed to, and leave the LOC to update (or not) their recommendation, which specifies version 3.
chasil•21h ago
Very true.

However, a significant fraction of the current installed base would not upgrade, requiring new feature development for both versions.

The test harness would also need implementations for both versions.

Then the DO-178B status would need maintenance for both.

That introduces significant complexity.

johannes1234321•19h ago
Compared to the amount of SQLite database files in the world only few are shared between different applications. If there is an upgrade path most won't notice. The bigger issue imo is API and SQL dialect compatibility.
mockingloris•1d ago
From the official SQLite Database File Format page:

> The maximum size database would be 4294967294 pages at 65536 bytes per page or 281,474,976,579,584 bytes (about 281 terabytes).
>
> Usually SQLite will hit the maximum file size limit of the underlying filesystem or disk hardware long before it hits its own internal size limit.
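The quoted limit checks out arithmetically; a quick sketch in plain Python (nothing SQLite-specific, just the constants from the quote):

```python
# Arithmetic behind the documented maximum database size.
MAX_PAGES = 2**32 - 2    # 4294967294 usable page numbers
MAX_PAGE_SIZE = 65536    # largest supported page size in bytes

max_bytes = MAX_PAGES * MAX_PAGE_SIZE
print(max_bytes)              # 281474976579584
print(max_bytes / 10**12)     # ~281.47, i.e. about 281 terabytes
```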

saghm•1d ago
"Usually"? I'm not saying there are literally no computers in existence that might have this much space on a single filesystem, but...has there ever been a known case of someone hitting this limit with a single SQLite file?
mockingloris•1d ago
Wondered the same thing. That's a lot of data for just one file!

Did a full-day deep dive into SQLite a while back; funny how one tiny database runs the whole world—phones, AI, your fridge, your face... and like, five people keep it alive.

Blows my mind.

webstrand•1d ago
With block level compression you might manage it. But you'd have to be trying for it specifically.
mjevans•23h ago
Never underestimate the ability of an organization to throw money at hardware and use things _far_ past their engineered scale as long as the performance is still good enough to not make critical infrastructure changes that, while necessary, might take real engineering.

Though to be fair to those organizations: it's amazing the performance someone can get out of a quarter million dollars of off-the-shelf server gear. Just imagine how much RAM and enterprise-grade flash that budget can buy on top of AMD's or Intel's highest-bin CPUs!

dahart•23h ago
Poking around for only a minute, the largest SQLite file I could find is 600GB https://www.reddit.com/r/learnpython/comments/1j8wt4l/workin...

The largest filesystems I could find are ~1EB and 700PB at Oak Ridge.

FWIW, I took the ‘usually’ to mean usually the theoretical file size limit on a machine is smaller than theoretical SQLite limit. It doesn’t necessarily imply that anyone’s hit the limit.

wongarsu•23h ago
That's just 10 30TB HDDs. Throw in two more for redundancy and mount them in a single zfs raidz2 (a fancy RAID6). At about $600 per drive that's just $7200. Half that if you go with 28TB refurbished drives (throw in another drive to make up for lost capacity). That is in the realm of lots of people's hobby projects (mostly people who end up on /r/datahoarder). If you aren't into home-built NAS hardware you can even do this with stock Synology or QNAP devices

The limit is more about how much data you want to keep in sqlite before switching to a "proper" DBMS.

Also the limit above is for someone with the foresight that their database will be huge. In practice most sqlite files use the default page size of 4096, or 1024 if you created the file before the 2016 version. That limits your file to 17.6TB or 4.4TB respectively.

mastax•20h ago
Last week I threw together a 840TB system to do a data migration. $1500 used 36-bay 4U, 36 refurbished Exos X28 drives, 3x12 RAIDz2. $15000 all in.
hiatus•11h ago
Where did you source the drives?
dmd•22h ago
> I'm not saying there are literally no computers in existence that might have this much space on a single filesystem

I don't use it for sqlite, but having multi-petabyte filesystems, in 2025, is not rare.

yread•20h ago
The kioxia lc9 is sold with capacities up to 245TB, so we are like 1 year max away from having a single disk with more than 281TB
formerly_proven•19h ago
Seen bigger files on HPC systems. Granted, these were not generated intentionally. But still, they were.
porridgeraisin•1d ago
Related: https://sqlite-internal.pages.dev/

Discussions: https://news.ycombinator.com/item?id=43682006 | 5 months ago | 41 comments

cyanydeez•1d ago
The neatest thing I've seen is that you can put a SQLite db on an HTTP server and read it efficiently using range requests
pmarreck•23h ago
so basically using the http server as a randomly-accessed data store? sounds about right
simlevesque•20h ago
In my experience, this works when the db is read only.

And in these read only cases I'd use Parquet files queried with Duckdb Wasm.

johannes1234321•19h ago
There are implementations for that: For example https://github.com/psanford/sqlite3vfshttp or https://github.com/phiresky/sql.js-httpvfs
ncruces•14h ago
The latency on those requests matters, though.

You'll probably benefit from using the largest possible page size; also, keep alive; etc.

But even then, you'll pull at most 64 KiB per request. If you managed to have response times of 10 ms, you'd be pulling at most 52 Mbps.

So yeah, if your queries end up reading just a couple of pages, it's great. If they require a full table scan, you need some smart prefetching+caching to hide the latency.
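For a sense of what's involved, a minimal sketch of the pattern (the helper names are hypothetical; `urllib` is real, and the URL would be wherever the database file is served):

```python
import urllib.request

def fetch_page(url: str, page_no: int, page_size: int = 65536) -> bytes:
    """Fetch one SQLite page via an HTTP Range request (pages are 1-indexed)."""
    start = (page_no - 1) * page_size
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={start}-{start + page_size - 1}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# The latency math from the comment above: one page per round trip means
# throughput is capped at page_size / rtt, regardless of link speed.
def max_throughput_mbps(page_size: int, rtt_seconds: float) -> float:
    return page_size * 8 / rtt_seconds / 1e6

print(round(max_throughput_mbps(65536, 0.010), 1))  # 52.4 (Mbps at 64 KiB / 10 ms)
```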

SchwKatze•23h ago
Sometimes I ask myself whether we could do a better file format, something like Parquet but row-oriented
alphazard•23h ago
SQLite is a great example of a single factor mattering more than everything else combined. A database contained in a single file is such a good idea that it outweighs a poorly designed storage layer, poorly designed column formats, and a terrible SQL implementation.

If craftsmanship is measured by the long tail of good choices that give something a polished and pristine feel, then SQLite was built with none of it. And yet, it's by far the best initial choice for every project that needs a database. Most projects will never need to switch to anything more.

pmarreck•23h ago
> If craftsmanship is measured by the long tail of good choices that give something a polished and pristine feel, then SQLite was built with none of it.

It apparently has an extensive and thorough test suite. That's an excellent design choice that tons of other projects could learn from, and is probably a key element of its success.

Sometimes a poorly-designed thing that is excellently-documented and thoroughly-tested is better than a brilliantly-designed thing that is lacking in those. In fact, unless the number of users of the thing is 1 (the creator), the former is likely a better option across all possible use-cases.

Perhaps we could generalize this by stating that determinism > pareto-optimality.

chasil•22h ago
Digital Equipment Corporation sold a SQL database known as Rdb that could also run as a single file.

It was the first database to introduce a cost-based optimizer, and ran under both VMS and Digital UNIX.

Oracle bought it, and VMS versions are still supported.

https://www.oracle.com/database/technologies/related/rdb.htm...

https://en.m.wikipedia.org/wiki/Oracle_Rdb

(My employer is still using the VMS version.)

owyn•19h ago
Oh! RDB was the first database I worked with. I forgot all about it. I do remember refactoring the data layer so that it also worked with Berkeley DB, which is also owned by Oracle now. Or maybe it was the other way around? There was no SQL involved in that particular application so it was just a K/V store. Working with a local data file was the primary design goal, no client/server stuff was even on the radar. SQLite would have been perfect if it had existed.
Jabbles•22h ago
> poorly designed storage layer, poorly designed column formats, and a terrible SQL implementation

Is this opinion shared by others?

chasil•21h ago
Dr. Hipp has said several times that nobody expected a weakly-typed database to achieve the pervasiveness that is observed with SQLite.

At the same time, strict tables address some of the concern of those coming from conventional databases.

Dates and times are a core problem to SQLite not seen elsewhere as far as I know, but this does evade UTC and constantly shifting regional time. My OS gets timezone updates every few months, and avoiding that had foresight.

Default conformance with Postel's Law is SQLite's stance, and it does seem to work with the ANSI standard.

alberth•20h ago
While nobody expected it … it should not be unexpected.

Typically, the Lowest-Common-Denominator wins mass appeal/usage.

By not having safety checks or even type enforcement, SQLite actually caters to more use cases, not fewer.

formerly_proven•19h ago
SQLite probably doesn't do anything with times and dates except punting some functions to the limited libc facilities because including any proper date-time facilities would basically double the footprint of SQLite. Same for encodings and collations.
SQLite•18h ago
> Dr. Hipp has said several times that nobody expected a weakly-typed database to achieve the pervasiveness that is observed with SQLite.

I don't remember ever saying that. Rather, see https://sqlite.org/flextypegood.html for a detailed explanation of why I think flexible typing ("weak typing" is a pejorative and inaccurate label) is a useful and innovative feature, not a limitation or a bug. I am surprised at how successful SQLite has become, but if anything, the flexible typing system is a partial explanation for that success, not a cause of puzzlement.

chasil•17h ago
Did I misinterpret the experts' assertion of impossibility?

"I had this crazy idea that I’m going to build a database engine that does not have a server, that talks directly to disk, and ignores the data types, and if you asked any of the experts of the day, they would say, “That’s impossible. That will never work. That’s a stupid idea.” Fortunately, I didn’t know any experts and so I did it anyway, so this sort of thing happens. I think, maybe, just don’t listen to the experts too much and do what makes sense. Solve your problem."

https://corecursive.com/066-sqlite-with-richard-hipp/

jmull•16h ago
> Did I misinterpret the experts' assertion of imposibility?

Misstated, I'd say. You said "nobody" but the actual quote is about the assumed conventional wisdom of the time, which is quite different. And while this was probably inadvertent, you phrased it in a way that almost made it sound like that was Dr. Hipp's original opinion, which, of course, is the opposite of true.

chrisweekly•18h ago
I often forget or mix up which "Law" refers to which observation, and I'm surely not the only one. So:

Postel's Law, also known as the Robustness Principle, is a guideline in software design that states: "be conservative in what you send, be liberal in what you accept."

da_chicken•21h ago
I think it's one of the reasons DuckDB has seen the popularity that it has.
benjiro•17h ago
DuckDB is a columnar database, and columnar DBs are way better for analytics, statistics... That is the main reason for its popularity: the ability to run specific workloads that row-based databases will struggle with or be slower at.

Nothing to do with the poster's badly formatted complaints about SQLite. By that metric DuckDB has a ton of issues that outscale SQLite's.

qaq•17h ago
That's a strange argument: DuckDB is for OLAP and SQLite is for OLTP
da_chicken•8h ago
Yeah, but most applications are small. So, at the scale of most applications you can drop in DuckDB with zero change in actual performance. It still has indexes to support highly selective queries because it needs to have functional primary keys.
kevin_thibedeau•22h ago
It was designed to be a DB for Tcl at a time when that language didn't have typed objects. Its SQL implementation reflects that. Where are the grand Python, or Perl, or JS DBs?
codesnik•18h ago
I've never used it, but perl contains support for Berkeley DB in stdlib since forever. But sqlite maps to perl just fine.
miohtama•18h ago
ZODB https://zodb.org/en/latest/
skissane•14h ago
It actually does have typed values, it is just the schema didn’t constrain the value types stored in each column, until relatively recently the column type was mostly just documentation. However, now it has STRICT tables which do constrain the value types of columns. And for a lot longer you’ve been able to implement the same thing manually using check constraints-which is a bit verbose if you are writing the schema by hand, much less of a problem if it is being generated out of ORM model classes/etc
degamad•11h ago
>> It was designed to be a DB for Tcl at a time when that language didn't have typed objects. Its SQL implementation reflects that.

> It actually does have typed values

Now. As the article points out, they were not part of the initial design, because of the Tcl heritage.

skissane•7h ago
AFAIK it has always had typed values. Don’t confuse column types (which constrain a column to containing only values of a specified type) with value types (which enable it to treat the string “12” and the integer 12 and the floating point 12.0 as three distinct values)

Tcl has value types. Tcl 7.x and earlier only had one data type, the string-so adding two integers required two string-to-int conversions followed by an int-to-string conversion. In 1997, Tcl 8.x was released, which internally has distinct values types (int, string, etc), although it retains the outward appearance of “everything-is-a-string” for backward compatibility. So SQLite’s Tcl heritage included distinguishing different types of values, as is done in post-1997 Tcl.
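This distinction between column types and value types is easy to see from Python's standard `sqlite3` module; a small demonstration (an ordinary, non-STRICT table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")  # declares column *affinity*, not a constraint
con.execute("INSERT INTO t VALUES (12), ('twelve'), (12.5)")

# Each stored value carries its own type, independent of the declared column type.
for value, typ in con.execute("SELECT x, typeof(x) FROM t"):
    print(value, typ)
# 12 integer
# twelve text
# 12.5 real

# With "CREATE TABLE s (x INTEGER) STRICT" (SQLite >= 3.37), the 'twelve'
# insert would raise an error instead of being stored as text.
```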

christophilus•20h ago
Firebird also fits the bill, I think, but never took off. Firebird even supports client-server deployments.
sethev•18h ago
This seems like an unnecessarily negative comment. I've been a user of SQLite for over 20 years now (time flies!); what you're calling lack of polish, I would chalk up to Dr. Hipp being conscientious about maintaining compatibility over the long term. So much so that the Library of Congress recommends it for long-term preservation of data.

Long term compatibility (i.e. prioritizing the needs of users vs chasing inevitably changing ideas about what feels polished or pristine), near fanatical dedication to testing and quality, and sustained improvement over decades - these are the actual signs of true craftsmanship in an engineering project.

(plus, I don't agree with you that the storage layer, column format, or SQL implementation are bad).

alphazard•14h ago
> I would chalk up to Dr. Hipp being conscientious about maintaining compatibility over the long term.

I agree. I am not suggesting that the SQLite team doesn't know how to make the technology better. Just that they aren't/haven't. Backwards compatibility is a good reason not to.

My original comment was contrasting craftsmanship and utility, since both are somewhat prized on HN, but they aren't the same thing at all. Look at a system like Wireguard. A huge amount of small decisions went into making that as simple and secure as it is. When most developers are confronted with similar decisions, they perform almost randomly and accumulate complexity over the long tail of decisions (it doesn't matter just pick a way). With Wireguard, every design decision reliably drove toward simplicity (it does matter, choose carefully).

jmull•14h ago
I don't think they ever hesitate to make sqlite better. It's just that they have a different definition of "better" than you.
ttz•10h ago
> contrasting craftsmanship and utility, since both are somewhat prized on HN

I'd say they're prized everywhere, though "craftsmanship" is really subjective. and the HN I usually [edit/add: see] seems to have more a meta of "criticize anything someone tries to build, and rave about IQ" tbh ;)

SQLite works and I don't have to think about it why it works (too much). That is IMO a true hallmark of solid engineering.

AlexClickHouse•17h ago
Exactly as in MS Access, Interbase/Firebird, and dBase II.
crazygringo•14h ago
> outweighs a poorly designed storage layer, poorly designed column formats, and a terrible SQL implementation

You're going to have to expand on that, because I have no idea what you're talking about, nor does anyone else here seem to.

It's a relational database meant primarily for a single user. It's SQL. It works as expected. It's performant. It's astonishingly reliable.

The only obviously questionable design decision I'm aware of is for columns to be able to mix types, but that's more "differently designed" rather than "poorly designed", and it's actually fantastic for automatically saving space on small integers. And maybe the fact ALTER TABLE is limited, but there are workarounds and it's not like you'll be doing that much in production anyways.

What are your specific problems with it?

pluto_modadic•7h ago
I think they do a good job with test coverage, compatibility, and sustainable support. Can't say that about most every other hype database made by a fortune 500 and shut down 3 years later.
lisper•22h ago
> The database page size in bytes. Must be a power of two between 512 and 32768 inclusive, or the value 1 representing a page size of 65536.

What an odd design choice. Why not just have the value be the base 2 logarithm of the page size, i.e. a value between 9 and 16?

kevincox•22h ago
If I had to guess this field was specified before page sizes of 65536 were supported. And at that point using the value 1 for page sizes of 65536 made the most sense.
Retr0id•21h ago
There exists hardware with non-power-of-two disk sector sizes. Although sqlite's implementation requires powers-of-two today, a future implementation could conceivably not. Representing 64k was presumably an afterthought.

https://eki.moe/posts/using-520-byte-sector-disks/

SQLite•18h ago
> Why not just have the value be the base 2 logarithm of the page size, i.e. a value between 9 and 16?

Yes, that would have been a better choice. Originally, the file format only supported page sizes between 512 and 32768, though, and so it just seemed natural to stuff the actual number into a 2-byte integer. The 65536 page size capability was added years later (at the request of a client) and so I had to implement the 65536 page size in a backwards compatible way. The design is not ideal for human readability, but there are no performance issues nor unreasonable code complications.

The page size value is not the only oddity. There other details in the file format that could have been done better. But with trillions of databases in circulation, it seems best to leave these minor quirks as they are rather than to try to create a new, more perfect, but also incompatible format.

kayson•22h ago
Any recommendations from HN for a write-once (literally once), data storage format that's suitable for network storage?

sqlite docs recommend avoiding using it on network storage, though from what I can gather, it's less of an issue if you're truly only doing reads (meaning I could create it locally and then copy it to network storage). Apache Parquet seems promising, and it seems to support indexing now which is an important requirement.

mcculley•22h ago
SQLite works fine over read-only NFS, in my experience. Just only work on an immutable copy and restart your application if ever changing it. If your application is short lived and can only ever see an immutable copy on the path, then it is a great solution.
nasretdinov•21h ago
SQLite does work on NFS even in a read-write scenario. Discovered by accident, but my statement still holds. The WAL mode is explicitly not supported over network filesystems, but I guess you don't expect it to work there :)
kayson•20h ago
My experience has been the opposite... Lots of db lock and corruption issues. The FAQ doesn't call out WAL specifically, just says don't do it at all: https://www.sqlite.org/faq.html#q5
kirici•19h ago
I've had multiple flaky issues with SQLite (e.g. non-HA Grafana) on Azure Files using NFS v4.1 leading to locked DBs. Perhaps some implementations work, I'm not gonna rely on it or advise others to do so.
nasretdinov•18h ago
Yeah trying to write from several hosts will certainly fail if you don't have advisory locks working, which is not a given, so you are right of course
simlevesque•20h ago
Parquet files are what I use.
pstuart•19h ago
Multiple writers on network storage is the issue. Reading should be totally fine.
usefulcat•8h ago
Ordinary files inside squashfs?

https://www.kernel.org/doc/html/latest/filesystems/squashfs....

heavyset_go•4h ago
SQLite over NFS works if you have one writer and many readers.
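The "immutable copy" pattern discussed above can be made explicit with SQLite's URI options: `mode=ro` and `immutable=1` are documented SQLite URI parameters, and `immutable=1` disables locking and change detection entirely, which is safe only if the file genuinely never changes underneath you. A sketch using Python's `sqlite3` (the path is a stand-in for the copy on network storage):

```python
import os
import sqlite3
import tempfile

# Build a throwaway database first (stand-in for the file copied to network storage).
path = os.path.join(tempfile.mkdtemp(), "example.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
con.execute("INSERT INTO kv VALUES ('greeting', 'hello')")
con.commit()
con.close()

# Open it read-only and declare it immutable: SQLite skips locking and change
# detection, which is what makes shared read-only network use workable.
ro = sqlite3.connect(f"file:{path}?mode=ro&immutable=1", uri=True)
print(ro.execute("SELECT v FROM kv WHERE k = 'greeting'").fetchone()[0])  # hello
```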
Dwedit•20h ago
My only question is whether you really need a prefix before every value to say what type it is.
lawrencejgd•17h ago
Any field in SQLite can contain any type. Even if the schema says a field should be INTEGER, it could hold TEXT, so it's necessary to specify the type of every single value
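Concretely, each record's header stores one "serial type" code per value, and the code determines both the type and the stored size (the code table is in the file-format document; this decoder is a simplified sketch, not SQLite's code):

```python
def describe_serial_type(n: int) -> str:
    """Describe the value type implied by a record-header serial type code."""
    fixed = {0: "NULL", 1: "int8", 2: "int16", 3: "int24", 4: "int32",
             5: "int48", 6: "int64", 7: "float64",
             8: "literal 0", 9: "literal 1"}
    if n in fixed:
        return fixed[n]
    if n >= 12:
        # Even codes >= 12 are blobs, odd codes >= 13 are text; the payload
        # length is encoded in the code itself.
        if n % 2 == 0:
            return f"blob[{(n - 12) // 2}]"
        return f"text[{(n - 13) // 2}]"
    return "reserved"  # codes 10 and 11

print(describe_serial_type(6))   # int64
print(describe_serial_type(23))  # text[5]
```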
mrtimo•17h ago
It’s 2025. Let’s separate storage from processing. SQLite showed how elegant embedded databases can be, but the real win is formats like Parquet: boring, durable storage you can read with any engine. Storage stays simple, compute stays swappable. That’s the future.
re•16h ago
Counterpoint: "The two versions of Parquet" https://news.ycombinator.com/item?id=44970769 (17 days ago, 50 comments)
codedokode•16h ago
As I understood by reading the short description, Parquet is a column-oriented format which is made for selecting data and which is difficult to use for updating (like Yandex Clickhouse).
relium•14h ago
The one issue I have with SQLite's file format is that if part of the file gets corrupted, you can't easily recover the rest of the file. I asked Richard Hipp about this many years ago and he said that fixing the problem would unfortunately break binary compatibility.
rkagerer•13h ago
The fact this fits in a few pages and is so approachable is a testament to its simplicity. I think I'd find it a lot harder to grok the file format of, for example, a Word doc/docx file.
mdaniel•12h ago
I wouldn't put .doc and .docx next to one another, as they're only tangentially related. I'd bet getting the <html><body><p>hello, world</p></body></html> of .docx would be some silliness, but would not be hard to grok. I couldn't readily find a browsable copy of ECMA 376 4th Ed online but https://github.com/PumasAI/WriteDocx.jl/blob/v1.2.0/docs/src... was in the ballpark of what I expected to find in some section of the actual spec