I'm a maintainer of Valkey GLIDE (the official Valkey client) and work on the AWS ElastiCache team. My day job is deep in database client internals, which gives me a clear view of where Node.js job queues hit a ceiling at scale.
The problems with queues like BullMQ aren't subtle: 3+ round-trips per operation, Lua EVAL scripts that NOSCRIPT-error on restart, Lists/BRPOPLPUSH primitives that predate Streams, and complete blindness to cloud topology (cross-AZ bills sneak up on you).
So I built Glide-MQ: a high-performance job queue for Node.js, built on Valkey/Redis OSS Streams and powered by Valkey GLIDE (a Rust core exposed through native NAPI bindings).
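Quick taste of the surface first. This is a sketch, not the published API: I'm assuming a BullMQ-style Queue/Worker shape, and the import path, option names, and processor signature are placeholders.

```typescript
// Usage sketch: import path, option names, and processor signature
// are assumptions; the real package may differ in the details.
import { Queue, Worker } from "glide-mq";

const queue = new Queue("emails", {
  addresses: [{ host: "localhost", port: 6379 }], // GLIDE-style endpoint config
});

// Producer: each add() becomes an entry in a Valkey Stream.
await queue.add("welcome", { userId: 42 });

// Consumer: jobs are claimed through a Stream consumer group, so an
// entry stays in the PEL until the processor resolves (at-least-once).
const worker = new Worker("emails", async (job) => {
  console.log(`sending ${job.name} to user ${job.data.userId}`);
});
```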
Because I maintain the underlying client, I could go further than an external library ever could:
1-RTT per job — completeAndFetchNext folds completion + next fetch + activation into a single FCALL. No more chatty round-trips (sketch after this list).
Server Functions over EVAL — one FUNCTION LOAD, persistent across restarts. NOSCRIPT errors gone entirely.
Streams + Consumer Groups, not Lists/BRPOPLPUSH — the PEL gives at-least-once delivery with fewer moving parts.
Cluster-native, not cluster-bolted — hash-tagged keys from day one. No {braces} surprise when you scale out.
AZ-Affinity routing — reads go to same-AZ replicas. Up to 75% reduction in cross-AZ costs on ElastiCache.
Batch API pipelining — addBulk for 1,000 jobs: 228ms to 18ms (12.7x) via GLIDE's non-atomic pipeline.
IAM auth — native ElastiCache/MemoryDB auth with auto token refresh. No secrets in env vars.
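To make the 1-RTT hot path concrete, here's roughly what that call looks like at the GLIDE level. The function name comes from this post, but the key layout and argument order are guesses; the point is the shape: completion, result write, and the next claim ride on a single FCALL against hash-tagged keys.

```typescript
// Illustrative, not glidemq internals: one FCALL replaces the separate
// ack / result-write / fetch-next / mark-active round trips.
import { GlideClient } from "@valkey/valkey-glide";

const client = await GlideClient.createClient({
  addresses: [{ host: "localhost", port: 6379 }],
});

const finishedJobId = "1717171717171-0"; // stream entry id of the job just processed
const consumerName = "worker-1";

// The {emails} hash tag keeps the queue's keys in one cluster slot,
// which is what lets this run as a single server-side function call
// on a cluster (the "cluster-native" point above). The key layout and
// argv order here are assumptions.
const nextJob = await client.fcall(
  "completeAndFetchNext",
  ["{emails}:stream", "{emails}:meta"],
  [finishedJobId, JSON.stringify({ ok: true }), consumerName],
);
```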
Numbers (no-op processor, Valkey 8.0, single node):
- c=1: 4,376 jobs/s | c=10: 15,504 jobs/s | c=50: 48,077 jobs/s
- 15KB payload compressed to 331 bytes (98% savings)
Also included: transparent gzip (zero config), OpenTelemetry tracing, TestQueue/TestWorker in-memory backend for unit tests without running Valkey, and chain()/group()/chord() for job pipeline workflows.
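The in-memory backend makes worker logic testable without a running instance. A sketch of what that enables: TestQueue/TestWorker are the names from the list above, while the drain() helper, import path, and assertion style are my assumptions.

```typescript
// Unit-test sketch: TestQueue/TestWorker are real names from the post;
// drain() and the import path are assumptions.
import { TestQueue, TestWorker } from "glide-mq";

const queue = new TestQueue("emails");
const sent: number[] = [];

new TestWorker("emails", async (job) => {
  sent.push(job.data.userId);
});

await queue.add("welcome", { userId: 42 });
await queue.drain(); // assumed helper: run all pending jobs in memory

console.assert(sent.includes(42), "welcome job was processed");
```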
Companion package: @glidemq/dashboard — drop createDashboard([q1, q2]) into an Express app (or any other framework) and get a live REST + SSE UI across all your queues (avifenesh/glidemq-dashboard on GitHub).
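Mounting it looks something like the sketch below; createDashboard([q1, q2]) is the documented call, and I'm assuming it returns an Express-compatible handler with the mount path left to you.

```typescript
// Dashboard mounting sketch; assumes createDashboard returns an
// Express-compatible handler. Queue construction uses the same
// placeholder surface as the earlier sketches.
import express from "express";
import { createDashboard } from "@glidemq/dashboard";
import { Queue } from "glide-mq";

const emailQueue = new Queue("emails");
const imageQueue = new Queue("images");

const app = express();

// One mount serves the REST endpoints, the SSE event stream, and the
// live UI for every queue passed in.
app.use("/admin/queues", createDashboard([emailQueue, imageQueue]));

app.listen(3000);
```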
Would love brutal testing of its limits, especially from anyone who's been burned by queue scaling in production.