
Loading Pydantic models from JSON without running out of memory

https://pythonspeed.com/articles/pydantic-json-memory/
134•itamarst•6mo ago

Comments

thisguy47•6mo ago
I'd like to see a comparison of ijson vs just `json.load(f)`. `ujson` would also be interesting to see.
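A stdlib-only sketch (synthetic payload; exact numbers vary by machine and Python version) shows the baseline gap with `tracemalloc` — how far `json.loads` peak memory outruns the document size:

```python
import json
import tracemalloc

# Hypothetical payload, just to illustrate the measurement technique.
payload = json.dumps([{"id": i, "name": f"user-{i}"} for i in range(100_000)])

tracemalloc.start()
data = json.loads(payload)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"JSON text: {len(payload) / 1e6:.1f} MB")
print(f"peak memory during json.loads: {peak / 1e6:.1f} MB")
```

The same harness works for comparing ijson or ujson, by swapping the parse call.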
itamarst•6mo ago
For my PyCon 2025 talk I did this. Video isn't up yet, but slides are here: https://pythonspeed.com/pycon2025/slides/

The linked-from-original-article ijson article was the inspiration for the talk: https://pythonspeed.com/articles/json-memory-streaming/

tomrod•6mo ago
I have a side question -- what did you use for slides?
itamarst•6mo ago
https://remarkjs.com/
fjasdfas•6mo ago
So are there downsides to just always setting slots=True on all of my python data types?
itamarst•6mo ago
You can't add extra attributes that weren't part of the original dataclass definition:

  >>> from dataclasses import dataclass
  >>> @dataclass
  ... class C: pass
  ... 
  >>> C().x = 1
  >>> @dataclass(slots=True)
  ... class D: pass
  ... 
  >>> D().x = 1
  Traceback (most recent call last):
    File "<python-input-4>", line 1, in <module>
      D().x = 1
      ^^^^^
  AttributeError: 'D' object has no attribute 'x' and no __dict__ for setting new attributes
Most of the time this is not a thing you actually need to do.
masklinn•6mo ago
Also some of the introspection stops working e.g. vars().

If you're using dataclasses it's less of an issue, because dataclasses.asdict still works.

monomial•6mo ago
I rarely need to dynamically add attributes myself on dataclasses like this but unfortunately this also means things like `@cached_property` won't work because it can't internally cache the method result anywhere.
franga2000•6mo ago
IIRC you can just include a __dict__ slot and @cached_property should start working again.
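Sketched with a plain class (since @dataclass(slots=True) doesn't let you add to the generated slots), assuming Python 3.8+ for functools.cached_property:

```python
from functools import cached_property

class Circle:
    # Declaring "__dict__" as a slot gives instances a place for
    # cached_property to store its result, alongside the named slot.
    __slots__ = ("radius", "__dict__")

    def __init__(self, radius):
        self.radius = radius

    @cached_property
    def area(self):
        print("computing...")
        return 3.14159 * self.radius ** 2

c = Circle(2)
print(c.area)  # computed on first access
print(c.area)  # served from the instance __dict__ cache
```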
jmugan•6mo ago
My problem isn't running out of memory; it's loading a complex model where the fields are BaseModels and unions of BaseModels multiple levels deep. It doesn't load it all the way, and leaves some of the deeper parts as dictionaries. I almost need a parser to search the space of different loads. Anyone have ideas for software that does that?
causasui•6mo ago
You probably want to use Discriminated Unions https://docs.pydantic.dev/latest/concepts/unions/#discrimina...
jmugan•6mo ago
Yeah, I'm doing that
enragedcacti•6mo ago
The only reason I can think of for the behavior you are describing is if one of the unioned types at some level of the hierarchy is equivalent to Dict[str, Any]. My understanding is that Pydantic will explore every option provided recursively and raise a ValidationError if none match but will never just give up and hand you a partially validated object.

Are you able to share a snippet that reproduces what you're seeing?

jmugan•6mo ago
That's an interesting idea. It's possible there's a Dict[str,Any] in there. And yeah, my assumption was that it tried everything recursively, but I just wasn't seeing that, and my LLM council said that it did not. But I'll check for a Dict[str,Any]. Unfortunately, I don't have a minimal example, but making one should be my next step.
enragedcacti•6mo ago
One thing to watch out for while you debug is that the default 'smart' mode for union discrimination can be very unintuitive. As you can see in this example, an int vs a string can cause a different model to be chosen two layers up even though both are valid. You may have perfectly valid uses of Dict within your model that are being chosen in error because they result in less type coercion. left_to_right mode (or ideally discriminated unions if your data has easy discriminators) will be much more consistent.

    >>> from typing import Any, Dict
    >>> from pydantic import BaseModel, Field
    >>> class A(BaseModel):
    ...     a: int
    ...
    >>> class B(BaseModel):
    ...     b: A
    ...
    >>> class C(BaseModel):
    ...     c: B | Dict[str, Any]
    ...
    >>> C.model_validate({'c':{'b':{'a':1}}})
    C(c=B(b=A(a=1)))
    >>> C.model_validate({'c':{'b':{'a':"1"}}})
    C(c={'b': {'a': '1'}})

    >>> class C(BaseModel):
    ...     c: B | Dict[str, Any] = Field(union_mode='left_to_right')
    ...
    >>> C.model_validate({'c':{'b':{'a':"1"}}})
    C(c=B(b=A(a=1)))
cbcoutinho•6mo ago
At some point, we have to admit we're asking too much from our tools.

I know nothing about your context, but in what context would a single model need to support so many permutations of a data structure? Just because software can, doesn't mean it should.

shakna•6mo ago
Anything multi-tenant? There's a reason Salesforce is used by so many large organisations. The multi-nesting lets you account for all the discrepancies that come with scale.

Just tracking payments through multiple tax regions will explode the places where things need to be tweaked.

not_skynet•6mo ago
going to shamelessly plug my own library here: https://github.com/mivanit/ZANJ

You can have nested dataclasses, as well as specify custom serializers/loaders for things which aren't natively supported by json.

jmugan•6mo ago
Ah, but I need something JSON-based.
not_skynet•6mo ago
It does allow dumping to/recovering from json, apologies if that isn't well documented.

Calling `x: str = json.dumps(MyClass(...).serialize())` will get you json you can recover to the original object, nested classes and custom types and all, with `MyClass.load(json.loads(x))`

m_ke•6mo ago
Or just dump pydantic and use msgspec instead: https://jcristharif.com/msgspec/
itamarst•6mo ago
msgspec is much more memory efficient out of the box, yes. Also quite fast.
mbb70•6mo ago
A great feature of pydantic is the validation hooks that let you intercept serialization/deserialization of specific fields and augment behavior.

For example, if you are querying a DB that returns a column as a JSON string, it's trivial with Pydantic to JSON-parse the column as part of deserialization with an annotation.

Pydantic is definitely slower and not a 'zero cost abstraction', but you do get a lot for it.

jtmcivor•6mo ago
One approach to do that in msgspec is described here https://github.com/jcrist/msgspec/issues/375#issuecomment-15...
aitchnyu•6mo ago
Can it do incremental parsing? Can't tell from a brief look.
jtmcivor•6mo ago
IIUC:

* You still need to load all the bytes into memory before passing to msgspec decoding

* You can decode a subset of fields, which is really helpful

* Reusing msgspec decoders saves some cpu cycles https://jcristharif.com/msgspec/perf-tips.html#reuse-encoder...

Slides 17, 18, 19 have an example of the first two points https://pythonspeed.com/pycon2025/slides/#17

zxilly•6mo ago
Maybe using mmap would also save some memory, I'm not quite sure if this can be implemented in Python.
itamarst•6mo ago
Once you switch to ijson it will not save any memory, no, because ijson essentially uses zero memory for the parsing. You're just left with the in-memory representation.
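To illustrate why a streaming parser's memory stays flat, here is a minimal stdlib-only sketch (nothing like ijson's actual implementation, and the function name is made up) that yields top-level array items one at a time via json.JSONDecoder.raw_decode:

```python
import io
import json

def iter_json_array(fp, chunk_size=65536):
    """Yield items of a top-level JSON array one at a time, reading
    the file in fixed-size chunks instead of loading the whole document.
    Caveat: bare numbers split across chunk boundaries could mis-parse;
    a real streaming parser like ijson handles that case."""
    decoder = json.JSONDecoder()
    buf = ""
    # Read until we see the opening '[' of the array.
    while "[" not in buf:
        chunk = fp.read(chunk_size)
        if not chunk:
            return
        buf += chunk
    buf = buf[buf.index("[") + 1:]
    while True:
        buf = buf.lstrip().lstrip(",").lstrip()
        if buf.startswith("]"):
            return
        try:
            obj, end = decoder.raw_decode(buf)
        except json.JSONDecodeError:
            chunk = fp.read(chunk_size)
            if not chunk:
                raise  # truncated document
            buf += chunk
            continue
        yield obj
        buf = buf[end:]

# Only one item plus the current chunk is in memory at a time:
for item in iter_json_array(io.StringIO('[{"id": 1}, {"id": 2}]')):
    print(item)
```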
dgan•6mo ago
I gave up on Python dataclasses & JSON. I'm using protobuf objects within the application itself. I also have a "...Mixin" class for almost every wire model, with extra methods.

Automatic, statically typed deserialization is worth the trouble in my opinion

fidotron•6mo ago
Having only recently encountered this, does anyone have any insight as to why it takes 2GB to handle a 100MB file?

This looks highly reminiscent (though not exactly the same, pedants) of why people used to get excited about using SAX instead of DOM for XML parsing.

itamarst•6mo ago
I talk about this more explicitly in the PyCon talk (https://pythonspeed.com/pycon2025/slides/ - video soon) though that's not specifically about Pydantic, but basically:

1. Inefficient parser implementation. It's just... very easy to allocate way too much memory if you don't think about large-scale documents, and very difficult to measure. Common problem with many (but not all) JSON parsers.

2. CPython in-memory representation is large compared to compiled languages. So e.g. 4-digit integer is 5-6 bytes in JSON, 8 in Rust if you do i64, 25ish in CPython. An empty dictionary is 64 bytes.
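The CPython numbers are easy to check with sys.getsizeof (figures below are for 64-bit CPython; they vary slightly by version):

```python
import sys

print(sys.getsizeof(1234))        # small int: 28 bytes on 64-bit CPython
print(sys.getsizeof(1234.0))      # float: 24 bytes
print(sys.getsizeof({}))          # empty dict: 64 bytes
print(sys.getsizeof("username"))  # short ASCII str: ~57 bytes
```

Note these are shallow sizes: a dict's 64 bytes doesn't include the keys and values it references.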

cozzyd•6mo ago
Funny to see awkward array in this context! (And... do people really store giant datasets in json?!?).
jfb•6mo ago
My sweet summer child
chao-•6mo ago
Often the legacy of an engineer (or team) who "did what they had to do" to meet a deadline, and if they wanted to migrate to something better post-launch, weren't allowed to allocate time to go back and do so.

At least JSON or CSV is better than the ad hoc homegrown formats you found at medium-sized companies that came out of the 90's and 00's.

ljm•6mo ago
Some people even use AI-generated JSON as a semantic layer over their SQL.
CJefferson•6mo ago
To take 2GB to parse a 100MB file, memory use has to grow to 20x the file size.

Let's imagine the file is mostly full of single-digit numbers with no spaces (so lists like 2,4,1,0,9,3...).

Each number then takes 2 bytes in the file (digit plus comma), so at 20x we get to spend 40 bytes storing it in memory.

Make a minimal sized class to store an integer:

    class JsonInt:
        x = 1
That object's size is already 48 bytes.

Usually we store floats from JSON; the size of 1.0 as a float in Python is 24 bytes.

Now, you can get smaller, but as soon as you introduce any kind of class structure, or don't parse numbers until they are used (in case you want people to be able to interpret them as ints or floats), you blow through a 20x memory size increase.

fidotron•6mo ago
> We need to spend 40 bytes storing a number.

But . . . why? Assuming they aren't BigInts or similar these are maximum 8 bytes of actual data. This overhead is ridiculous.

Using classes should enable you to be much smaller than the JSON representation, not larger. For example, V8 does it like https://v8.dev/docs/hidden-classes

> not parsing numbers until they are used

Doesn't this defeat the point of pydantic? It's supposed to be checking the model is valid as it's loaded using jiter. If the data is valid it can be loaded into an efficient representation, and if it's not the errors can be emitted during iterating over it.

jerf•6mo ago
"But . . . why?"

This is CPython. This is how it works. It's not particularly related to JSON. That sort of overhead is put on everything. It just hurts the most when the thing you're putting the overhead on is a single integer. It hurts less when you're doing it to, say, a multi-kilobyte string.

Even in your v8 example, that's a JIT optimization, not "how the language works". You break that optimization, which you can do at any moment with any change in your code base, you're back to similar sizes.

Boxing everything lets you easily implement the dynamic scripting language's way of treating everything as an Object of some sort, but it comes at a price. There's a reason dynamic scripting languages, even after the JIT has come through, are generally substantially slower languages. This isn't the only reason, but it's a significant part of it.

fidotron•6mo ago
> Even in your v8 example, that's a JIT optimization, not "how the language works". You break that optimization, which you can do at any moment with any change in your code base, you're back to similar sizes.

The whole point of the v8 optimization is that it works in the face of prototype chains that merge, etc., as you add new fields dynamically, so if you change your code base it adapts.

deepsquirrelnet•6mo ago
Alternatively, if you had to go with json, you could consider using jsonl. I think I’d start by evaluating whether this is a good application for json. I tend to only want to use it for small files. Binary formats are usually much better in this scenario.
kayson•6mo ago
How does the speed of the dataclass version compare?
scolvin•6mo ago
Pydantic author here. We have plans for an improvement to pydantic where JSON is parsed iteratively, which will make way for reading a file as we parse it. Details in https://github.com/pydantic/pydantic/issues/10032.

Our JSON parser, jiter (https://github.com/pydantic/jiter) already supports iterative parsing, so it's "just" a matter of solving the lifetimes in pydantic-core to validate as we parse.

This should make pydantic around 3x faster at parsing JSON and significantly reduce the memory overhead.

Lucasoato•6mo ago
Pydantic is a life changing library, thanks so much for your work!
adeeshaek•6mo ago
Seconded. Please keep up the awesome work!
itamarst•5mo ago
That's great! Would also be cool (separately from Pydantic use case) to add jiter backend to ijson.