The problems I saw seem to be related to these issues:
https://github.com/launchbadge/sqlx/issues/3080
https://github.com/launchbadge/sqlx/issues/2510
The problems did not manifest until the application was under load with multiple concurrent sessions.
Troubleshooting the issue by changing the connection pool parameters did not seem to help.
I ended up refactoring the application's data layer to use a NoSQL approach to work around the issue.
I really like the idea of SQLx and appreciate the efforts of the SQLx developers, but I would advise caution if you plan to use SQLx with SQLite.
I find writing SQL in Rust with SQLx takes far fewer lines of code than the same in Go. This server was ported from Go, and the end result was ~40% fewer lines of code, lower memory usage, and stable CPU/memory usage over time.
It has the advantage that it implements the parsing and type-checking logic in pure Go, allowing it to import your migrations and infer the schema for type checking. With SQLx, you need to have your database engine running at compile time, during proc macro execution, with the schema already available. This makes SQLx kind of a non-starter for me, though I understand why nobody wants to do what sqlc does: it involves a lot of duplication that essentially reimplements database features. (Somewhat ironically, it's less valuable for sqlc to avoid the live database, since it runs as code generation outside the normal compilation; even if it did need a live database connection to do the code generation, it would be less of an impact. But it's still nice for simplicity.)
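To make that concrete, here's roughly what SQLx's checked macros look like. This is a sketch with an invented `film` table; it only compiles if the `query!` macro can reach the database at `DATABASE_URL` (or cached metadata) to validate the SQL and infer the types:

```rust
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let pool = PgPoolOptions::new().connect(&url).await?;

    // Validated at compile time against the live schema: the macro checks
    // the SQL, the parameter types, and the output column types.
    let film = sqlx::query!(
        "SELECT title, release_year FROM film WHERE film_id = $1",
        42_i32
    )
    .fetch_one(&pool)
    .await?;

    // Output columns become fields on an anonymous record struct.
    println!("{} ({:?})", film.title, film.release_year);
    Ok(())
}
```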
I ran SQLx / MySQL on a 6M-MAU Actix-Web website with 100k QPS at peak, with relatively complex transactions and queries. It was rock solid.
I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).
In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.
I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.
Comparing and contrasting: sqlc type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use an actual database connection (presumably this is useful if you're doing something weird that sqlc's internal model doesn't support, but even using PostgreSQL-specific features I hadn't really run into much of this).
The sqlc authors are to be applauded for making a static analyzer; that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability cost is basically zero — don't you want to connect to a dev database locally anyway to run end-to-end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (e.g., constraint violations).
2. Sure, the database will probably be running locally, when you're working on database stuff. However, the trouble here is that while I almost definitely will have a local database running somehow, it is not necessarily going to be accessible from where the compiler would normally run. It might be in a VM or a Docker container where the database port isn't actually directly accessible. Plus, the state of the database schema in that environment is not guaranteed to match the code.
If I'm going to have something pull my database schema to do some code generation I'd greatly prefer it to be set up in such a way that I can easily wrap it so I can hermetically set up a database and run migrations from scratch so it's going to always match the code. It's not obvious what kinds of issues could be caused by a mismatch other than compilation errors, but personally I would prefer if it just wasn't possible.
I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and allows you to connect from the host device, regardless of what libraries you're using. Applying your migrations will vary by what tools you use, but the port forwarding is 2 simple lines. (Note that SQLx has a migration facility, but it's quite bare bones.)
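As a sketch, assuming plain-SQL migration files (the stock Postgres image applies anything in /docker-entrypoint-initdb.d when a fresh volume is initialized; image tag and paths are placeholders):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
    # The two port-forwarding lines: make the DB reachable from the host
    # so compile-time checkers can connect to it.
    ports:
      - "5432:5432"
    # Plain-SQL migrations are applied once, on first startup of a fresh volume.
    volumes:
      - ./migrations:/docker-entrypoint-initdb.d:ro
```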
Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, with no special tooling needed to manage migration files. sqlc, on the other hand, hard-codes support for specific Go migration tools; each of the supported tools was either too opinionated for my use case or seemed unmaintained. SQLx's built-in migration tooling requires zero extra dependencies and satisfies my needs. Additionally, inferring types inside the actual database has its benefits: (1) no situations where subsets of valid query syntax are rejected, and (2) the DB may be used for actual schema validation.
For an example of why (2) may be better than sqlc's approach: databases like SQLite sometimes allow NULL primary keys; this gets reflected in SQLx when it validates inferred types against actual database schemas. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, this footgun is documented in the type system whenever it can detect that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).
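To make the footgun concrete (table name invented): on an ordinary, non-STRICT SQLite table, a PRIMARY KEY column that isn't an INTEGER rowid alias will accept NULL:

```sql
-- Non-STRICT SQLite table; the PK is TEXT, not an INTEGER rowid alias.
CREATE TABLE account (id TEXT PRIMARY KEY, email TEXT);

-- Succeeds in SQLite, despite the PRIMARY KEY constraint.
INSERT INTO account (id, email) VALUES (NULL, 'a@example.com');
```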
That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts who added CRUD functionality by using LLMs to generate SQL. (We do have a lot of safeguards around that, to make it less crazy than it sounds.)
If you want fancy LINQ, GraphQL, OData, or even a lot of REST frameworks, you're not getting any of that with SQLC, but that's typically not what you'd want from a Go backend in my experience. Might as well build it with C# or Java then.
Let's compare:

SQLC:
- configuration file (yaml/json)
- schema files
- query files
- understand the meta language in query file comments to generate the code you want

SQLx:
- env: DATABASE_URL
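For reference, the meta language in question is sqlc's comment annotations. A minimal query file (names taken from sqlc's docs) looks like:

```sql
-- name: GetAuthor :one
SELECT * FROM authors
WHERE id = $1 LIMIT 1;
```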
Now, does that mean that SQLx is the best possible database framework? No, it does not. But because I didn't spend my time on things that weren't related to the exact queries I had to write, I got more work done.
I want to appreciate the hard work the SQLx devs have put in to push the bar for a decent SQL developer experience. People give them a really hard time for certain design decisions, pending features, and bugs. I've seen multiple comments calling its compile-time query validation "gimmicky", and that's not nice at all. You can go to any other language and you won't find another framework that is as easy to get started with.
And of course, now that I have it, the incremental cost of adding a new query is really low as well.
I would recommend using pg_dump for your schema file, which decouples it from SQLC as such. This makes it easier to maintain your DB; we use Goose, for example. In our setup, part of the pipeline is that you write your Goose migration, then an automated process updates the DB running in your local dev DB container, does a pg_dump from that, and then our dev container instance of SQLC compiles your schema for you.
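A sketch of that pipeline (connection details and paths are placeholders):

```sh
# Apply the new Goose migration to the local dev database.
goose -dir ./migrations postgres "host=localhost user=dev dbname=app sslmode=disable" up

# Snapshot the resulting schema for sqlc (DDL only, no data).
pg_dump --schema-only -h localhost -U dev app > schema.sql

# Regenerate the type-checked Go code.
sqlc generate
```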
The configuration file is centralized as well, so you don't have to worry about it.
I agree with you on the SQLC meta language on queries; I appreciate that it's there, but we tend to avoid using it. I personally still consider the meta language a better way of doing things than in-code SQL queries. This is a philosophical sort of thing, of course, and I respect that not everyone agrees with me on this. It's hard for me to comment on SQLx, however, as I haven't really used it.
What I like about SQLC is that it can be completely de-coupled from your Go code.
I preferred go-jet since it introspects the database for its code generation instead.
I've been quite happy with this setup!
[1] https://github.com/launchbadge/sqlx/blob/main/FAQ.md#how-do-...
I've tried alternatives like Diesel and sea-orm. To be honest, I feel like full-blown ORMs really aren't a very good experience in Rust. They work great for dynamic languages in a lot of cases, but trying to tie a DB schema into Rust's type system often creates a ton of issues once you try to do anything more than a basic query.
It's got a nice little migration system too, via sqlx-cli, which is solid.
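For anyone curious, the sqlx-cli migration workflow is roughly this (the database URL is a placeholder):

```sh
# Point sqlx-cli at your dev database.
export DATABASE_URL=postgres://dev:dev@localhost/app

# Create the database (if needed) and apply everything in ./migrations.
cargo sqlx database setup

# Add a new timestamped migration file, edit it, then apply it.
cargo sqlx migrate add create_films
cargo sqlx migrate run
```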
* Or in your editor as you're writing code.
```
It is required to mark left-joined columns in the query as nullable, otherwise SQLx expects them to not be null even though it is a left join. For more information, see the link below: https://github.com/launchbadge/sqlx/issues/367#issuecomment-...
```
Did you have other problems beyond this, or are you referring to something different?
The issue above is a bit annoying but not enough that I'd switch to an ORM over it. I think SQLx overall is great.
```java
int year = 2019;
. . .
for (Film film : "[.sql/] select * from film where release_year > :rel_year".fetch(year)) {
    out.println(film.title);
}
```
1. https://github.com/manifold-systems/manifold/blob/master/man...

I'm a big fan of SQL in general (even if the syntax can be verbose, the declarative nature is usually pleasant and satisfying to use), but whenever dynamism creeps in it gets messy: conditional joins/selects/where clauses, etc.
How do folks that go all-in on SQL-first approaches handle this? Home-grown dynamic builders are what I've seen various places I've worked implement in the past, but they're usually not built out as a full API, just kind of cobbled together. Eventually they swap to an ORM to solve the issue.
* [1] https://kysely.dev
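For SQLx specifically, one answer is its runtime QueryBuilder: the dynamic parts give up compile-time checking, but parameters stay bound rather than interpolated. A minimal sketch with an invented schema:

```rust
use sqlx::{Postgres, QueryBuilder};

async fn search_films(
    pool: &sqlx::PgPool,
    min_year: Option<i32>,
    title_prefix: Option<String>,
) -> Result<Vec<sqlx::postgres::PgRow>, sqlx::Error> {
    let mut qb: QueryBuilder<Postgres> =
        QueryBuilder::new("SELECT id, title, release_year FROM film WHERE 1=1");

    // Each optional filter appends SQL plus a safely bound parameter.
    if let Some(year) = min_year {
        qb.push(" AND release_year > ").push_bind(year);
    }
    if let Some(prefix) = title_prefix {
        qb.push(" AND starts_with(title, ").push_bind(prefix).push(")");
    }

    qb.build().fetch_all(pool).await
}
```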
How do people who choose to use a no-dsl SQL library, like SQLx, handle dynamic queries? Especially with compile-time checking. The readme has this example:
...
WHERE organization = ?
But what if you have multiple possible where-conditions, let's say `WHERE organization = ?`, `WHERE starts_with(first_name, ?)`, or `WHERE birth_date > ?`, and you need some combination of those (possibly also none of them) based on query parameters to the API? I think that's a pretty common use case.

One way is to fold the optional filters into a single query:

```
WHERE organization = $1
  AND ($2 IS NULL OR starts_with(first_name, $2))
  AND ($3 IS NULL OR birth_date > $3)
```

With SQLx you would make all the params Options and fill them according to the parameters that were sent to your API. Does that make sense?
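A sketch of that Option-parameter pattern with the checked macro (table and column names invented; the `::text`/`::date` casts help the macro and Postgres agree on the parameter types):

```rust
use chrono::NaiveDate;

struct Person {
    id: i64,
    first_name: String,
    birth_date: Option<NaiveDate>,
}

async fn search_people(
    pool: &sqlx::PgPool,
    org: String,
    name_prefix: Option<String>,   // None means "no name filter"
    born_after: Option<NaiveDate>, // None means "no birth-date filter"
) -> Result<Vec<Person>, sqlx::Error> {
    sqlx::query_as!(
        Person,
        r#"
        SELECT id, first_name, birth_date
        FROM people
        WHERE organization = $1
          AND ($2::text IS NULL OR starts_with(first_name, $2))
          AND ($3::date IS NULL OR birth_date > $3)
        "#,
        org,
        name_prefix,
        born_after,
    )
    .fetch_all(pool)
    .await
}
```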
* [1] https://sql-page.com/
tmpfs•3d ago
My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in Postgres.
Then I wanted to dockerize an SQLx app at one point, and it all became a hassle: you need Postgres running at compile time, and trying to integrate that with docker compose was a real chore.
Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.
SQLx is a nice idea but too cumbersome in my experience.
[1]: https://docs.rs/sqlite/latest/sqlite/
[2]: https://docs.rs/postgres/latest/postgres/
vegizombie•3d ago
For needing a DB at compile time, there's an option to have it produce artifacts on demand that stand in for the DB, although you'll need to connect to a DB again each time your queries change. And even that is all optional: the compile-time checking of your queries is opt-in in the first place.
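Concretely, that's SQLx's offline mode; the workflow is roughly this (on older SQLx versions the cache is a sqlx-data.json file rather than a .sqlx/ directory):

```sh
# With a dev database reachable, cache query metadata into .sqlx/:
cargo sqlx prepare

# Commit the cache; later builds (CI, Docker) validate queries
# against it instead of a live database:
SQLX_OFFLINE=true cargo build
```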
belak•13h ago
[1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...
0xCMP•13h ago
Versus Python and Node, which often need to be properly linked against the system they'll actually be running in.
no_circuit•12h ago
For lower-level libraries there is also the more-downloaded SQLite library, rusqlite [2], whose author also maintains libsqlite3-sys, which is what the sqlite crate wraps.
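A minimal rusqlite sketch (schema invented):

```rust
use rusqlite::{Connection, Result};

fn main() -> Result<()> {
    // In-memory database for illustration; Connection::open takes a path.
    let conn = Connection::open_in_memory()?;

    conn.execute(
        "CREATE TABLE film (id INTEGER PRIMARY KEY, title TEXT NOT NULL)",
        (), // no parameters
    )?;
    conn.execute("INSERT INTO film (title) VALUES (?1)", ["Alien"])?;

    let title: String =
        conn.query_row("SELECT title FROM film WHERE id = ?1", [1], |row| row.get(0))?;
    println!("{title}");
    Ok(())
}
```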
The most pleasant ORM experience, when you want one, IMO is the SeaQL ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM, I don't try to make databases swappable via the ORM, so that I can support database-specific enhancements.
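For a flavor of the derive-macro approach, a minimal SeaORM entity (table and columns invented, following the entity layout from SeaORM's docs):

```rust
use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "film")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub title: String,
    pub release_year: Option<i32>, // nullable column maps to Option
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
```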
The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code that uses macros to define the schema-defining types, which are then used for type/member checking of the row structs. If the auto-detection does not work, you have to use its patch_file system, which can't be maintained automatically through Cargo alone [4] (I wrote a Makefile scheme for myself). You will most likely have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for Postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. It may also not be the best library for you if you want large denormalized tables [6], both because of compile times and because a database that is not normalized [7] is considered an anti-pattern by the project.
If you are just starting out with Rust, I'd recommend checking out SeaQL. Then, if benchmarks show you need more performance, swap in one of the lower-level libraries for the affected methods/services.
[1] https://github.com/launchbadge/sqlx/commit/47f3d77e599043bc2...
[2] https://crates.io/crates/rusqlite
[3] https://www.sea-ql.org/SeaORM/
[4] https://github.com/diesel-rs/diesel/issues/2078
[5] https://github.com/diesel-rs/diesel/issues/1728
[6] https://github.com/diesel-rs/diesel/discussions/4160
[7] https://en.wikipedia.org/wiki/Database_normalization
TrueDuality•9h ago
Even in SaaS systems, once you get large enough, with a large enough test suite, you'll want to tier those tests: start with a lowest-common-denominator backend (SQLite) that doesn't incur network latency before getting into the serious integration tests.