[1] https://github.com/jk-jeon/dragonbox?tab=readme-ov-file#perf...
The difference between which values are precisely representable in binary and which are precisely representable in decimal means small errors can creep in.
https://randomascii.wordpress.com/2012/02/11/they-sure-look-...
EDIT: If you're talking about decimal->binary->decimal round-tripping, it's a completely different story though.
But how JSON numbers are handled by different parsers might surprise you. This blog post actually does a good job of detailing the subtleties and the choices made in a few standard languages and libraries: https://github.com/bterlson/blog/blob/main/content/blog/what...
I think one particular surprise is that C# and Java standard parsers both use OpenAPI schema hints that a piece of data is of type ‘number’ to map the value to a decimal floating-point type, not a binary one.
Not sure which parser you consider standard, as Java doesn't have one at all (in the standard libraries). Other than that, the existing ones just use the target type they deserialize into (not the JSON), e.g. int, long, etc.
The de facto standard is similar to the expectation that everyone uses Spring Boot.
Presumably this is dependent on the runtime. You certainly don't need to respect JavaScript's (or any other runtime's) parser if you don't use JavaScript.
For instance, my current position explicitly uses arbitrary-precision decimals to deserialize json numbers.
The point is that it takes algorithms that are provably correctly implemented on both ends of any JSON serialization/deserialization. And if one implementation can round-trip its own floating point values, that's great, but JSON is an interop format, so does it round-trip if you send it to another system and back?
It's just an unnecessary layer of complexity that binary floating point serializers do not have to worry about.
I think it's less about side effects being common when serializing, just that their fast path avoids anything that could have side effects (like toJSON).
The article touches briefly on this.
That said, I see that they called it out as you say, now. When I first read it, I thought they watched for side effects.
I'm assuming they have an allow list on all standard types. The date types, in particular, often have a toJSON that seems like it should still be used? (Or am I wrong on that, too? :D )
People have exploited this sort of side effect to get bug bounties before via type confusion attacks, iirc.
That getter looks inoffensive and will, depending on your requirements, work just fine. But it has side effects because the string interpolation allocates and could trigger a garbage collection.
Note that if you're using modern JS 'class' blocks a 'get x ()' will be ignored by JSON.stringify, so if you're aiming to reproduce this you have to use old-school Object.defineProperty instead.
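For anyone who wants to reproduce it, a minimal sketch (property names are made up):

```ts
// With a class, the accessor lives on the prototype, so JSON.stringify
// never sees it:
class WithGetter {
  get x() { return `now: ${Date.now()}`; }
}
JSON.stringify(new WithGetter()); // '{}'

// To reproduce the side-effecting getter, define an enumerable accessor
// on the object itself:
const obj = {};
Object.defineProperty(obj, 'x', {
  enumerable: true,
  get() { return `now: ${Date.now()}`; }, // the interpolation allocates a string
});
JSON.stringify(obj); // '{"x":"now: ..."}'
```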
I'm sure there are plenty of other similar uses that I just don't know about.
That said, if this really did include computed fields, that seems far broader.
It's a real example of "you can solve just about anything with a billion dollars" though :)
I'd prefer JavaScript kept evolving (think "strict", but "stricter", "stricter still", ...) to a simpler and easier to compile/JIT language.
This may already be a thing; node has native TS support by just ignoring or stripping types from the code, and TS features that can't be easily stripped (iirc namespaces, enums) are deprecated and discouraged nowadays.
TS is not actually that special in terms of running it. TS types are for type checking which isn't done at runtime, running TS is just running the JS parts of the code as-is.
For TypeScript, it uses the types as hints to the compiler; for example, it has int types that alias number.
Very early still, but very cool.
It'd need a ton of buy-in from the whole community and all VM implementors to have a chance at pencilling out in any reasonable time span. Not saying I'm against it, just noting.
>runtime-level APIs
like the Document APIs?
Yeah, the DOM in the browser, node APIs in nodejs, etc.
> especially if it was faster
Well, that's the thing. Initially it'll be slower, because the code you call, and the code that calls you, will be mostly unsound.
Unless you can run the sound code without the soundness checks at boundaries, at which point it becomes harder to reason about, and you'll be tempted to add additional runtime checks to your code for things the type system should catch.
This is / was asm.js, a limited subset of JS without all the dynamic behaviour, which allowed the engine to skip a lot of checks and assumptions. It was deprecated in favor of WASM - basically communicating that if you need the strictness or performance, use a different language.
As for JS strictness, eslint / biome with all the rules engaged will also make it strict.
[1] https://source.chromium.org/chromium/_/chromium/v8/v8/+/5cbc...
[2] https://github.com/facebook/folly/commit/2f0cabfb48b8a8df84f...
So array list instead of array?
Hombre, you’re about the best there is.
Nail on the head; a lot of JS overhead is due to its dynamic nature. asm.js disallowed some of this dynamic behaviour (like changing the shape of objects, iirc), meaning they could skip a lot of these checks.
Although Java also has the advantage of not having to compile in ~real time (i.e. it has an ahead-of-time compiler)
JSON is a great minimalist format which is both human and machine readable. I never quite understood the popularity of ProtoBuf; the binary format is a major sacrifice to readability. I get that some people appreciate the type validation but it adds a lot of complexity and friction to the transport protocol layer.
It's hard enough to not break logical compatibility, so I appreciate not having to think too hard about wire compat. You can of course solve the same thing with JSON, but, well, YOU have to solve it.
(Also worth noting, there are a lot of things I don't like about the grpc ecosystem so I don't actually use it that much. But this is one of the pieces I really like a lot).
It does seem like some technologies get credit for solving problems that they created.
> You can of course solve the same thing with JSON, but, well, YOU have to solve it.
There is not a single well established convention across all languages/impls. The default behavior in many languages if a field is missing is to either panic, or replace it with a null pointer (which will just panic later, most likely).
I don't disagree that people go for ProtoBuf a bit too eagerly though.
It is readable but it's not a good/fast format. IEEE754<->string is just expensive even w/ all the shortcuts and improvements. byte[]s have no good way to be represented either.
There's probably a lesson in there somewhere.
It's how we used to make websites before SPA, and it's refreshing to see that it still makes a noticeable difference even on today's powerful CPUs and high speed networks.
Any idea why?
The default object property iteration rules in JS define that numeric properties are traversed first in their numeric order, and only then others in the order they were added to the object. Since the numbers need to be in their numeric, not lexical, order, the engine would also need to parse them to ints before sorting.
> JSON.stringify({b: null, 10: null, 1: null, a: null})
'{"1":null,"10":null,"b":null,"a":null}'There's structuredClone (https://developer.mozilla.org/en-US/docs/Web/API/Window/stru... https://caniuse.com/?search=structuredClone) with baseline support (93% of users), but it doesn't work if fields contain DOM objects or functions meaning you might have to iterate over and preprocess objects before cloning so more error-prone, manual and again inefficient?
Svelte has `$state.snapshot()` for this reason I believe.
Once you factor in prefetching, branch prediction, etc., a highly optimized JSON serializer should be effectively free for most real world workloads.
The part where json sucks is IO overhead when modifying blobs. It doesn't matter how fast your serializer is if you have to push 100 megabytes to block storage every time a user changes a boolean preference.
Do we get this even if you call `JSON.stringify(data, null, 0)`? Or do the arguments literally have to be undefined?
https://microsoftedge.github.io/Demos/json-dummy-data/512KB.... Chrome 138.0.7204.184
> On Nov 16, 2008, at 11:57 PM, Peter Michaux wrote:
> > The name "JSON.stringify" seems a little too "Web 2.0" cool to me.
> Hardly. It's hacker dictionary material, old sk00l.
> > Is there any reason a more usual and serious sounding option like "serialize" was not used?
> Ugh, Web 1.0 my-first-Java-serializable-implementation anti-nostalgia vapors.
[0]: https://web.archive.org/web/20100718152229/https://mail.mozi...
Serialize, Marshal, toString: timeless. Stringify: Trendy, gone with the winds...
Imagine how far we could have come if we started from a reasonable basis.
hinkley•6mo ago
Sooner or later it seems like everyone gets the idea of reducing event loop stalls in their NodeJS code by trying to offload it to another thread, only to discover they’ve tripled the CPU load in the main thread.
I’ve seen people stringify arrays one entry at a time. Sounds like maybe they are doing that internally now.
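A rough sketch of that one-entry-at-a-time approach, yielding to the event loop between chunks (assumes plain, serializable entries):

```ts
// Stringify a big array in chunks and yield between chunks so other
// work on the event loop can run.
async function stringifyInChunks(items: unknown[], chunkSize = 1000): Promise<string> {
  const parts: string[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    parts.push(items.slice(i, i + chunkSize).map(v => JSON.stringify(v)).join(','));
    // Let timers, IO callbacks, etc. run before the next chunk.
    await new Promise<void>(resolve => setImmediate(() => resolve()));
  }
  return '[' + parts.join(',') + ']';
}
```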
If anything I would encourage the V8 team to go farther with this. Can you avoid bailing out for subsets of data? What about the CString issue? Does this bring faststr back from the dead?
jcdavis•6mo ago
As someone who has come from a JVM/go background, I was kinda shocked how amateur hour it felt tbh.
MehdiHK•6mo ago
That's what I experienced too. But I think the deeper problem is Node's cooperative multitasking model. A preemptive multitasking model (like Go's) wouldn't block the whole event loop (other concurrent tasks) while serializing a large response (often the case with GraphQL, but possible with any other API too). Yeah, it does kinda feel like amateur hour.
miroljub•6mo ago
Nowadays Node is JavaScript. They are the guys driving JavaScript standards and new features for a decade or so. Nothing prevents them from incrementally starting to add proper parallelism, multithreading, ...
promiseofbeans•6mo ago
Yes, there have been a lot of advances in modern JS for things like atomics and other fun memory stuff, but that's just the natural progression for a language as popular as JS. The 3 main JS engines are still developed primarily for web browsers, and web developers are the primary audience considered in ES language discussions (although I'll concede that in recent years server runtimes have been considered more and more)
coldtea•6mo ago
Clients are 90% still v8, so hardly different than Node.
jerf•6mo ago
In principle perhaps not. In practice it is abundantly clear now from repeated experience that trying to retrofit such things on to a scripting language that has been single-threaded for decades is an extremely difficult and error-prone process that can easily take a decade to reach production quality, if indeed it ever does, and then take another decade or more to become something you can just expect to work, expect to find libraries that use properly, etc.
I don't think it's intrinsic to scripting languages. I think someone could greenfield one and have no more problems with multithreading than any other language. It's trying to put it into something that has been single-threaded for a decade or two already that is very, very hard. And to be honest, given what we've seen from the other languages that have done this, I'd have a very, very, very serious discussion with the dev team as to whether it's actually worth it. Other scripting languages have put a lot of work into this and it is not my perception that the result has been worth the effort.
gwbas1c•6mo ago
NodeJS is intended for IO-heavy workloads. Specifically, it's intended for workloads that don't benefit from parallel processing in the CPU.
This is because Javascript is strictly a single-threaded language; i.e., it doesn't support shared access to memory from multiple threads. (And this is because Javascript was written for controlling a UI, and historically UI is all handled on a single thread.)
If you need true multithreading, there are plenty of languages that support it. Either you picked the wrong language, or you might want to consider creating a library in another language and calling into it from NodeJS.
MehdiHK•6mo ago
I didn't say multithreading anywhere. Multitasking (concurrency) != Multithreading.
You can do pre-emptive concurrency with a single thread in other runtimes, where each task gets a pre-defined CPU time slice; that solves fair scheduling for both IO- and CPU-bound workloads. Nobody is supposed to pick NodeJS for CPU-bound workloads, but in practice you cannot escape JSON parse/stringify blocking the event loop (which is CPU-bound).
hinkley•6mo ago
Just so. It is, or at least can be, the plurality of the sequential part of any Amdahl's Law calculation for Nodejs.
I'm curious if any of the 'side effect free' commentary in this post is about moving parts of the JSON calculation off of the event loop. That would certainly be very interesting if true.
However for concurrency reasons I suspect it could never be fully off. The best you could likely do is have multiple threads converting the object while the event loop remains blocked. Not entirely unlike concurrent marking in the JVM.
germandiago•6mo ago
I cannot go with such a messy ecosystem. I find Python highly preferable for my backend code for more or less low and middle traffic stuff.
I know Python is not that good deployment-wise, but the language is really understandable, I have tools for every use case, I can easily provide bindings from C++ code and it is a joy to work with.
If on top of that, they keep increasing its performance, I think I will stick to it for lots of backend tasks (except for high performance, where I have lately been doing with C++ and Capnproto RPC for distributed stuff).
m-schuetz•6mo ago
I would like to love Jupyter notebooks because notebooks are great for prototyping, but Jupyter and Python plotting libs are so clunky and slow that I always have to fall back to Node or writing a web page with JS and SVG for plotting and prototyping.
germandiago•6mo ago
What I like about Python is that I can code fast and put something there that will work, at least for backend work and with tools like Poetry.
Another different topic is packaging for other users. That is an entirely different story and way worse than just doing what I do.
hnlmorg•6mo ago
- Rapid application development
VB was easier and quicker
- GUI development
At least on Windows, in my opinion, VB is still the best language ever created for that. Borland had a good stab at it with their IDEs but nothing really came close to VB6 in terms of speed and ease of development.
Granted this isn't JS's fault, but CSS et al is just a mess in comparison.
- Cross-platform development
You have a point there. VB6 was a different era though.
- Type safety
VB6 wins here again
- Readability
This is admittedly subjective, but I personally don't find idiomatic node.js code all that readable. VB's ALGOL-inspired roots aren't for everyone, but I personally don't mind Begin/End blocks.
- Consistency
JS has so many weird edge cases. That's not to say that VB didn't have its own quirks. However they were less numerous in my experience.
Then you have inconsistencies between different JS implementations too.
- Concurrency
Both languages fail badly here. Yeah, node has async/await but I personally hate that design and, ultimately, node.js is still single-threaded at its core. So while JS is technically better, it's still so bad that I cannot justify giving it the win here.
- Developer familiarity
JS is used by more people.
- Code longevity
Does this metric even deserve a rebuttal given the known problem of JavaScript framework churn? You can't even recompile a sizable 2-year-old JavaScript project without running into problems. Literally every other popular language trumps JavaScript in that regard.
- Developer tooling
VB6 came with everything you needed and worked from the moment you finished the VB Visual Studio install.
With node.js you have a plethora of different moving parts you need to manually configure just to get started.
---
I'm not suggesting people should write new software in VB. But it was unironically a good language for what it was designed for.
Node/JS isn't even a good language for its intended purpose. It's just a clusterfuck of an ecosystem. Even people who maintain core JS components know this -- which is why tooling is constantly being migrated to other languages like Rust and Go. And why so many people are creating businesses around their bespoke JS runtimes aiming to solve the issues that node.js creates (and thus creating more problems due to ever-increasing numbers of "standards").
Literally the only redeemable factor of node.js is the network effect of everyone using it. But to me that feels more like Stockholm Syndrome than a ringing endorsement.
And if the best compliment you can give node.js is "it's better than this other ecosystem that died 2 decades ago" then you must realise yourself just how bad things really are.
fkyoureadthedoc•6mo ago
Just an FYI, but Stockholm Syndrome isn't real. In general I agree with the intended point though, people just like what they are familiar with and probably have a bias for what they learned first or used longest.
makeitdouble•6mo ago
A single language to rule them all: on the server, on the client, in the browser, in appliances. It truly was everywhere at some point.
Then people massively wish for something better and move to dedicated languages.
Put another way, for most shops the productivity gains of having a single language are far from incredible, and can even be negative in the most typical settings.
ifwinterco•6mo ago
JS is here to stay as the main scripting language for the web which means there probably will be a place for node as a back end scripting language. A lot of back ends are relatively simple CRUD API stuff where using node is completely feasible and there are real benefits to being able to share type definitions etc across front end and back end
makeitdouble•6mo ago
There are benefits, but cons as well. As you point out, if the backend is only straight proxying the DB, any language will do so you might as well use the same as the frontend.
I think very few companies running for a few years still have backends that simple. At some point you'll want to hide or abstract things from the frontend. Your backend will do more and more processing, more validation, and it will handle more and more domain-specific logic (tax/money, auditing, scheduling etc). It becomes more and more of a beast on its own, and you won't stay stuck with a language whose only real benefit is partially sharing types with the frontend.
lmm•6mo ago
There is a significant gain from running a single language everywhere. Not enough to completely overwhelm everything else - using two good languages will still beat one bad language - but all else being equal, using a single language will do a lot better.
makeitdouble•6mo ago
It made me think about the amount of work that went into JS to make it the powerhouse it is today.
Even in the browser, we're only able to do all these crazy things because of herculean efforts from Google, Apple and Mozilla to optimize every corner and build runtimes that have basically the same complexity as the OS they run on, to the point we got Chrome OS as a side product.
From that POV, we could probably take any language, pour that much effort into it and make it a more than decently performing platform. Java could have been that, if we really wanted it hard enough. There just was no incentive to do so for any of the bigger players outside of Sun and Oracle.
> all else being equal, using a single language will do a lot better.
Yes, there will be specific cases where a dedicated server stack is more of a liability. I still haven't found many, tbh. In the most extreme cases, people will turn to platforms like Firebase and throw money at the problem to completely abstract the server side.
strken•6mo ago
I've only rarely needed to do this. The two examples that stick in my mind are firstly event and calendar logic, and secondly implementing protocols that wrap webrtc.
teaearlgraycold•6mo ago
My blog post here isn’t as good as it should be, but hopefully it gets the point across
https://danangell.com/blog/posts/type-level-api-client/
bubblyworld•6mo ago
On types, I think the real value proposition is having a single source of truth for all domain types but because there's a serialisation layer in the way (http) it's rarely that simple. I've fallen back to typing my frontend explicitly where I need to, way simpler and not that much work.
(basically as soon as you have any kind of context-specific serialisation, maybe excluding or transforming a field, maybe you have "populated" options in your API for relations, etc - you end up writing type mapping code between the BE and FE that tends to become brittle fast)
Cthulhu_•6mo ago
TL;DR, this value prop is limited.
Lutger•6mo ago
The maintenance burden shifts from hand syncing types, to setting up and maintaining the often quite complex codegen steps. Once you have it configured and working smoothly, it is a nice system and often worth it in my experience.
The biggest benefit is not the productivity increase when creating new types, but the overall reliability and ease of changing stuff around that already exists.
jelder•6mo ago
I’ve been doing this for a long time and have never once “shared code between front end and back end” but sharing types between languages is the sweet spot.
teaearlgraycold•6mo ago
Just define your API interface as a collection of types that pull from your API route function definitions. Have the API functions pull types from your model layer. Transform those types into their post-JSON deserialization form and now you're trickling up schema from the database right into the client. No client to compile. No watcher to run. It's always in sync and fast to evaluate.
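A minimal sketch of that approach, with made-up handler and model names (a real setup would need a recursive Jsonified type for nested objects):

```ts
// `getUser` stands in for a server route handler; `User` for a model type.
type User = { id: number; name: string; createdAt: Date };

async function getUser(id: number): Promise<User> {
  // ...would hit the database here...
  return { id, name: 'example', createdAt: new Date() };
}

// Post-JSON form: after JSON.parse on the client, Dates are ISO strings.
type Jsonified<T> = { [K in keyof T]: T[K] extends Date ? string : T[K] };

// The client-side response type, derived straight from the handler.
type GetUserResponse = Jsonified<Awaited<ReturnType<typeof getUser>>>;
// => { id: number; name: string; createdAt: string }
```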
Lutger•6mo ago
Plus, openapi can be useful for other things as well: generating api documentation for example, mock servers or clients in multiple programming languages.
I'm not disagreeing with you, what is best always depends on context and also on the professional judgement of the one who is making the trade-offs. A certain perspective or even taste always slips into these judgement calls as well, which isn't invalid.
hinkley•6mo ago
Most tasks take more memory in the middle than at the beginning and end. And if you're sharing memory between processes that can only communicate by setting bytes, then the memory at the beginning and end represents the communication overhead. The latency.
But this is also why things like p-limit work - they pause an array of arbitrary tasks during the induction phase, before the data expands into a complex state that has to be retained in memory concurrently with all of its peers. By partially linearizing, you put a clamp on peak memory usage that Promise.all(arr.map(...)) does not; it's not just a thundering herd fix.
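A rough sketch of the difference, assuming the p-limit package and made-up item/handler names:

```ts
import pLimit from 'p-limit'; // assumes the p-limit package is installed

// Stand-ins for whatever work is being fanned out.
declare const items: string[];
declare function handleItem(item: string): Promise<void>;

// Unbounded: every task starts its "induction phase" at once, so peak memory
// is the sum of all in-flight intermediate state.
// await Promise.all(items.map(item => handleItem(item)));

// Bounded: at most 4 tasks are in flight at a time, which clamps peak memory
// as well as fixing the thundering herd.
const limit = pLimit(4);
await Promise.all(items.map(item => limit(() => handleItem(item))));
```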
dwattttt•6mo ago
Or I guess you can do it without the WebAssembly step.
userbinator•6mo ago
> JSON encoding is a huge impediment to communication
I wonder how much computational overhead JSON'ing adds to communications at a global scale, in contrast to just sending the bytes directly in a fixed format or something far more efficient to parse like ASN.1.
hinkley•6mo ago
The number of times I've gone in looking for 10%, backed out a bit and rearranged the code first to find 25%, better maintainability, and space for a feature that marketing has been bitching about for three years and development keeps insisting we cannot do in any reasonable amount of time? Probably averages out to at least three times per employer, which is a good number of miracles to perform.
tgv•6mo ago
That feels the wrong way to go. I would encourage the people that have this problem to look elsewhere. Node/V8 isn't well suited to backend or the heavier computational problems. Javascript is shaped by web usage, and it will stay like that for some time. You can't expect the V8 team to bail them out.
The Typescript team switched to Go, because it's similar enough to TS/JS to do part of the translation automatically. I'm no AI enthusiast, but they are quite good at doing idiomatic translations too.
com2kid•6mo ago
Node was literally designed to be good for one thing - backend web service development.
It is exceptionally good at it. The runtime overhead is tiny compared to the JVM, the async model is simple as hell to wrap your head around and has a fraction of the complexity of what other languages are doing in this space, and Node running on a potato of a CPU can handle thousands of requests per second w/o breaking a sweat using the most naively written code.
Also the compactness of the language is incredible, you can get a full ExpressJS service up and running, including auth, in less than a dozen lines of code. The amount of magic that happens is almost zero, especially compared to other languages and frameworks. I know some people like their magic BS (and some of the stuff FastAPI does is nifty), but Express is "what you see is what you get" by default.
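For illustration, a minimal sketch of that kind of Express setup (the bearer-token check is a made-up stand-in for real auth middleware):

```ts
import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical auth: reject anything without the expected bearer token.
app.use((req, res, next) => {
  if (req.headers.authorization !== `Bearer ${process.env.API_TOKEN}`) {
    res.status(401).json({ error: 'unauthorized' });
    return;
  }
  next();
});

app.get('/health', (_req, res) => res.json({ ok: true }));

app.listen(3000);
```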
> The Typescript team switched to Go, because it's similar enough to TS/JS to do part of the translation automatically.
The TS team switched to go because JS is horrible at anything that isn't strings or doubles. The lack of an int type hinders the language, so runtimes do a lot of work to try and determine when a number can be treated like an int.
JS's type system is both absurdly flexible and also limiting. Because JS basically allows you to do anything with types, Typescript ends up being one of the most powerful type systems that has seen mass adoption. (Yes other languages have more powerful type systems, but none of them have the wide spread adoption TS does).
If I need to model a problem domain, TS is an excellent tool for doing so. If I need to respond to thousands of small requests, Node is an excellent tool for doing so. If I need to do some actual computation on those incoming requests, eh, maybe pick another tech stack.
But for the majority of service endpoints that consist of "get message from user, query DB, reformat DB response, send to user"? Node is incredible at solving that problem.
tgv•6mo ago
I don't think it was, at least not originally. But even if it was, that doesn't mean it actually is good, and certainly not for all cases.
> Node running on a potato of a CPU can handle thousands of requests per second w/o breaking a sweat using the most naively written code.
The parent comment is specifically about this. It breaks down at a certain point.
> you can get a full ExpressJS service up and running, including auth, in less than a dozen lines of code
Ease of use is nice for a start, but usually becomes technical debt. E.g., you can write a pretty small search algorithm, but it will perform terribly. Not a problem at the start. You can set up a service with a just a little bit of code in any major language using some framework. Heck, there are code free servers. But you will have to add more and more work-arounds as the application grows. There's no free lunch.
> The TS team switched to go because JS is horrible at anything that isn't strings or doubles.
They switched because V8 is too slow and uses quite a bit of memory. At least, that's what they wrote. But that was not what I wanted to address. I was trying to say that if you have to switch, Go is a decent option, because it's so close to JS/TS.
> But for the majority of service endpoints ...
Because they are simple, as you say. But when you run into problems, asking the V8 team to bail you out with a few more hacks doesn't seem right.
com2kid•6mo ago
The difference with the Express ecosystem is that you aren't getting any less power than with FastAPI or Spring Boot, you just get less overhead. Spring Boot has 10x the config to get the same endpoint up and running as Express, and FastAPI has at least 3x the magic. Now some of FastAPI's magic is really useful (auto converting pydantic types to JSON Schemas on endpoints, auto generating API docs, etc), but it is still magic compared to what Express gets you.
The scaling story of Node is also really easy to think about and do capacity planning for. You aren't worried about contention or IPC (as this thread has pointed out, if you are doing IPC in Node you are in for a bad time, so just don't!), your unit of scaling is the Node process itself. Throw it in a docker image, throw that in a k8s cluster, assign .25 CPU to each instance. Scale up and down as needed.
Sometimes having one really damn simple and easy to understand building block is more powerful than having 500 blocks that can be misconfigured in ten thousand different ways.
hinkley•6mo ago
TypedArray is a toy. Very few domains actually work well with this sort of data. Statistics, yes. Games? They use it all the time, but games are also full of glitches used by speed runners, because games play fast and loose to maintain the illusion that they're doing much more per second than they should be able to.
DataView is a bit better. I am endlessly amazed at how many times I managed to read people talk about TypedArrays and SharedByteArrays before I discovered that DataView exists and has existed basically forever. Somebody should have mentioned it a lot, lot sooner.
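A small sketch of what reading and writing a binary record with DataView looks like:

```ts
const buf = new ArrayBuffer(12);
const view = new DataView(buf);

view.setUint32(0, 42);              // record id, big-endian by default
view.setFloat64(4, 3.14159, true);  // a little-endian double at byte offset 4

const id = view.getUint32(0);        // 42
const value = view.getFloat64(4, true); // 3.14159
```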
zamalek•6mo ago
Why not use structuredClone to communicate with the worker? So long as your object upholds all the rules you can pass it into postMessage directly.
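A minimal sketch using Node's worker_threads (the worker filename is made up); web Workers behave the same way:

```ts
import { Worker } from 'node:worker_threads';

// postMessage structured-clones its argument, so there's no JSON round trip
// and types like Map survive the transfer.
const worker = new Worker('./worker.js');
worker.postMessage({ id: 1, payload: new Map([['a', 1]]) });

// In ./worker.js:
// import { parentPort } from 'node:worker_threads';
// parentPort?.on('message', (msg) => console.log(msg.payload.get('a'))); // 1
```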
hinkley•6mo ago
Benchmarks aren’t showing structuredClone favorably versus JSON round tripping. Likely because JSON-compatible data structures are less complex than clonable data. I suspect with this change JSON will now be faster than structuredClone.
zamalek•6mo ago
Which ones?
Structured clone seems to be approximately the same as FF according to this (structuredClone is slightly faster on my machine, probably margin of error): https://measurethat.net/Benchmarks/Show/23052/0/structuredcl... (Linux: FF 141, Chromium 138).
cogman10•6mo ago
One thing that improves performance for the JVM that'd be nice to see in the Node realm is that JSON serialization libraries can stream out the serialization. One of the major costs of JSON is the memory footprint. Strings take up a LOT more space in memory than a regular object does.
Since the JVM typically only uses JSON as a communication protocol, streaming it out makes a lot of sense. The IO (usually) takes long enough to give a CPU reprieve while simultaneously saving memory.
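A rough sketch of what that could look like on the Node side, writing the JSON out piece by piece instead of building one giant string (backpressure handling omitted):

```ts
import { Writable } from 'node:stream';

// Stream a large array as JSON without materializing the full string.
function writeJsonArray(items: Iterable<unknown>, out: Writable): void {
  out.write('[');
  let first = true;
  for (const item of items) {
    if (!first) out.write(',');
    out.write(JSON.stringify(item)); // only one element's string exists at a time
    first = false;
  }
  out.write(']');
}

// Usage: writeJsonArray(rows, res), where `res` is e.g. an http.ServerResponse.
```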