I played with SQLite back when it was still available in-browser, and I felt it was on the brink of being a game-changer. If it were still supported in-browser and we had replication from the browser, peer-to-peer, I think we'd be living in a much more useful world. It's a lovely tech, but I never built anything serious around it. At this point, as a front-end web technology, it seems to be gone. I know I could conceivably use it to back a NodeJS server, keeping all the data in memory and local files, but I don't see a great use case for that. I do lots of small projects that could use SQLite, but I usually scaffold them around a single-shot MySQL DB for testing, which is easy to spin up and easy to detach from any given back-end instance. So I'm not sure what I'd gain by trying to make a tiny database on the back end live in SQLite files. I'm totally enchanted by stuff like Litestream, and I'm actually dying to find a reason to try it. But every good use case for SQLite that I could think of sort of died when it stopped being a viable client-side store.
TL;DR, what are people using SQLite for? What's the advantage over spinning up a tiny MySQL instance in a cloud somewhere, where you don't have to deal with managing replication and failover by yourself?
Think of apps like Spotify, WhatsApp, Airbnb, Uber, etc. Not to mention mail clients, web browsers, and so on. Probably 90% of non-web clients are using SQLite.
For that portion (the locally-run mobile backend, i.e. the middleware) I guess it would make more sense... so I see what you're saying.
[Edit: Of those four, maybe only Spotify is actually an Electron app...? Although I'm confused as to how the rest could leverage NodeJS locally.]
https://docs.python.org/3/library/sqlite3.html
https://www.sqlite.org/cintro.html
https://docs.rs/sqlite/latest/sqlite/
etc :)
Having consistent SQL everywhere is really appealing for data management.
https://docs.python.org/3/library/sqlite3.html
The built-in library makes it really quick and easy to use. Whereas with MySQL, or PostgreSQL in my case if I needed a full DB, you're looking at a third-party library. I have used Psycopg before, but it's just not needed here.
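For anyone who hasn't used it, this is roughly what "quick and easy" looks like with the standard library (the file and table names here are just made up for illustration):

    import sqlite3

    # A single file on disk (or ":memory:") is the entire database.
    con = sqlite3.connect("app.db")
    con.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    con.execute("INSERT INTO notes (body) VALUES (?)", ("hello sqlite",))
    con.commit()

    for row in con.execute("SELECT id, body FROM notes"):
        print(row)

    con.close()

No server to start, no credentials, no driver to install.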
Yes, I've come up against SQLite's locked-database performance troubles, and failed to actually get the multi-user thing working properly. But really I just needed to reapproach the problem.
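For anyone hitting the same thing: the usual mitigations are WAL mode and a longer busy timeout, which won't give you true multi-writer concurrency but do get rid of most spurious "database is locked" errors. A minimal sketch (the file name is illustrative):

    import sqlite3

    # "timeout" is how long a connection waits on a lock before raising
    # "database is locked" (the default is 5 seconds).
    con = sqlite3.connect("app.db", timeout=10)

    # WAL mode lets readers run alongside a writer; there is still
    # only ever one writer at a time.
    con.execute("PRAGMA journal_mode=WAL")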
My new startup, http://mapleintel.ca, is db.sqlite3 based. Thousands of rows in it so far and growing every day.
> searchcode.com’s SQLite database is probably one of the largest in the world, at least for a public-facing website. Its actual size is 6.4 TB.
https://www.sqlite.org/testing.html
To give you an idea of just how hardcore this is, they stress test something as fundamental as malloc() independently:
>SQLite, like all SQL database engines, makes extensive use of malloc() [...] On servers and workstations, malloc() never fails in practice and so correct handling of out-of-memory (OOM) errors is not particularly important. But on embedded devices, OOM errors are frighteningly common and since SQLite is frequently used on embedded devices, it is important that SQLite be able to gracefully handle OOM errors.
>OOM testing is accomplished by simulating OOM errors. SQLite allows an application to substitute an alternative malloc() implementation using the sqlite3_config(SQLITE_CONFIG_MALLOC,...) interface. The TCL and TH3 test harnesses are both capable of inserting a modified version of malloc() that can be rigged to fail after a certain number of allocations. These instrumented mallocs can be set to fail only once and then start working again, or to continue failing after the first failure. OOM tests are done in a loop. On the first iteration of the loop, the instrumented malloc is rigged to fail on the first allocation. Then some SQLite operation is carried out and checks are done to make sure SQLite handled the OOM error correctly. Then the time-to-failure counter on the instrumented malloc is increased by one and the test is repeated. The loop continues until the entire operation runs to completion without ever encountering a simulated OOM failure. Tests like this are run twice, once with the instrumented malloc set to fail only once, and again with the instrumented malloc set to fail continuously after the first failure.
I don't say this as a hater of MySQL! SQLite is built with very different constraints in mind. But data consistency is something it really shines at.
1. There's an official WASM build of SQLite3 (plus third-party ports) that would be more than glad to run in your browser.
2. I'd really love to know what applications fit the "we had replication from the browser, peer-to-peer, I think we'd be living in a much more useful world" scenario. We've had GunDB, IPFS, etc. living in the browser for years (and projects like Urbit), and the killer app just... doesn't seem to exist? Let alone anything useful even as a basic demo? Anyone have anything to point to? I just don't see it, personally.
There are probably a lot of hub-and-spoke systems like this flying way under the radar that would be a lot better if there were a reliable technology to keep them synchronized. I keep looking at Litestream and thinking about it.
Personally I use it a bunch in mobile and desktop apps.
I feel like a JSON file would be more compact and easier to read, but wtf do I know. Harder to query, I guess?
> All supported versions of Windows support SQLite, so your app does not have to package SQLite libraries. Instead, your app can use the version of SQLite that comes installed with Windows.
https://learn.microsoft.com/en-us/windows/apps/develop/data-...
Managing profiles and inventory in a solo game where crafting results are random and I don't like limited inventories.
You're doing something wrong if that's easier than using SQLite.
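On the "harder to query" point: the difference shows up as soon as you want anything other than the whole blob. A toy comparison, assuming a hypothetical inventory with name, rarity, and qty fields (file names and schema invented for the example):

    import json
    import sqlite3

    # JSON file: load everything, then filter in Python.
    with open("inventory.json") as f:
        items = json.load(f)
    rare = [i for i in items if i["rarity"] == "rare" and i["qty"] > 0]

    # SQLite file: ask only for the rows you want; an index can do the work.
    con = sqlite3.connect("inventory.db")
    rare = con.execute(
        "SELECT name, qty FROM items WHERE rarity = ? AND qty > 0",
        ("rare",),
    ).fetchall()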
> What's the advantage over spinning up a tiny MySQL instance in a cloud somewhere
One advantage is that your thing will work without needing network access.
How does a "fork" like this get tested to make sure everything keeps working and stays compatible with upstream after the changes?
There is a separate TH3 test suite, which is proprietary. It generates the tests as C code so you can run them in embedded and similar environments, and it also provides coverage of more obscure test cases.
I've never understood why other large open-source projects are just willing to accept contributions from anyone. What's the plan when someone copy-pastes code from some proprietary codebase and the rights holder finds it?
> SQLite is used by thousands of software projects, just three being Google's Android, Mozilla's Firefox and Apple's iOS which between them have billions of users. That is a main reason why SQLite is so careful and conservative with all changes.
That's a great perspective. How well does the SQLite team work with them? How well does it work in production, especially if you need SQLite compatibility? And
=>
"LumoSQL can swap SQLite backend, with Key-value store engines"
===
"LMDB is the most famous (but not the only) example of an alternative key-value store"
=>
"We currently only support LMDB as an alternative KV store"
===
"and LumoSQL can combine dozes of versions of LMDB and SQLite source code like this:"
=>
"LumoSQL will allow you to use different versions of SQLite and LMDB in parallel as different backends"
You are suggesting I misunderstood the original text; if that is true, I blame the original for being obfuscated.
> LumoSQL is a derivative of SQLite, the most-used software in the world. Our focus is on privacy, at-rest encryption, reproducibility and the things needed to support these goals. [...]
https://lumosql.org/src/lumosql/file?name=doc/project-announ...
I'm unsure what Phase 1 was about, or whether there is a planned Phase 3, but it seems to outline what they're currently aiming for, at least.
Oh, the license was changed to Apache 2.0! But the GitHub account still has a note that equates Hitler with Soros...
Why should the SQLite backend be replaced with LMDB?
UPD: Oops, LMDB was forked a long time ago, so maybe LMDB can be fixed already!
Lio•9h ago
1. https://lumosql.org/src/not-forking/doc/trunk/README.md
sksrbWgbfK•9h ago
Made by the same people who brought us SQLite.
aredox•9h ago
We really need a way to customise software at the source code level without forking.
LoganDark•6h ago
Mixin allows you to insert code into methods, modify calls to functions, read/write local variables, modify constants, and a lot more in that vein. It's how mods are made in the Fabric mod loader for Minecraft; I believe Forge also reluctantly added support for Mixin back in 1.16 or so. It's not really possible to implement in the same way for many other languages, but something like this, done as source code transformations (rather than bytecode, or machine code for compiled languages), is probably the kind of thing they're thinking of.
eddd-ddde•7h ago
How do you handle changing upstream files locally without forking? Do you just keep the changes in a separate configuration format that is applied lazily at build time?
I've never had issues with maintaining a fork anyways.
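Not speaking for the not-forking tool's actual format, but the general shape of "changes kept separately and applied at build time" can be as small as a script that fetches a pristine upstream and replays local patch files on top of it. A rough sketch (repository URL and paths are just for illustration):

    import subprocess
    from pathlib import Path

    UPSTREAM = "https://github.com/sqlite/sqlite.git"  # read-only mirror, for illustration
    PATCH_DIR = Path("patches")                         # local changes live here, not in a fork

    def build_tree(workdir: Path) -> None:
        # Fetch a pristine upstream checkout...
        subprocess.run(["git", "clone", "--depth=1", UPSTREAM, str(workdir)], check=True)
        # ...then replay local patches on top of it at build time.
        for patch in sorted(PATCH_DIR.glob("*.patch")):
            subprocess.run(["git", "apply", str(patch.resolve())], cwd=workdir, check=True)

    build_tree(Path("build/sqlite-upstream"))

The upside is that your delta stays small and explicit; the downside is that any upstream change can break your patches, which is presumably the problem this kind of tooling is trying to soften.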
alexjurkiewicz•6h ago
But if you give this a cool name, it's a New Idea.
e63f67dd-065b•2h ago
- A set of sqlite patches,
- Other upstreams and patches?
- A custom toolchain to build all the above together into one artefact
SOLAR_FIELDS•50m ago
Seems like they're just ditching the built-in tools Git/GitHub offer to achieve this and doing the exact same thing with custom bespoke tooling. Instead of doing that, I'd be more interested in leveraging what Git has baked in, with some sort of wrapper tool to perform my automation with a bespoke algorithm. There are merge drivers and strategies end users can implement themselves for this kind of power-user behavior that don't require some weird reinvention of concepts already built into Git.