> The method gives researchers a “map” of all the correlations in the model, allowing them to identify and remove specific bits of information with precision. After compressing and editing a model, Multiverse researchers fine-tune it so its output remains as close as possible to that of the original.
This seems to be the substance, but I didn't spot a link to a paper. Is there a technical explanation somewhere?
edit: it's really light on details. They have some graphs on reduction and a few (old) benchmarks where supposedly they don't lose much accuracy, but with such old models being listed, it's hard to know. More of a "promo pamphlet" than a paper tbh.
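The article doesn't say what the compression actually is, but the quoted pipeline (build a map of correlations, drop the redundant ones, then fine-tune the smaller model so its outputs match the original's) is roughly the shape of low-rank factorization plus distillation. Here's a minimal PyTorch sketch assuming a toy linear layer and an arbitrary rank budget; none of this is Multiverse's actual method, it just illustrates the compress-then-recover idea:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "original" model: a single linear layer (toy example, not a real LLM).
original = nn.Linear(512, 512, bias=False)

# Compress: truncated SVD keeps only the strongest correlation directions in the weights.
U, S, Vh = torch.linalg.svd(original.weight.detach(), full_matrices=False)
rank = 64                          # assumed rank budget
A = U[:, :rank] * S[:rank]         # (512, 64)
B = Vh[:rank, :]                   # (64, 512), so W is approximated by A @ B

class Compressed(nn.Module):
    def __init__(self, A, B):
        super().__init__()
        self.A = nn.Parameter(A.clone())
        self.B = nn.Parameter(B.clone())

    def forward(self, x):
        # Equivalent to x @ (A @ B).T, but never materializes the full matrix.
        return x @ self.B.T @ self.A.T

student = Compressed(A, B)

# Fine-tune the compressed model so its outputs track the original's (distillation-style).
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(32, 512)
    with torch.no_grad():
        target = original(x)
    loss = nn.functional.mse_loss(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy case the factorized layer holds about 65k parameters versus 262k for the original, and the fine-tuning loop pulls its outputs back toward the original's, which is the "output remains as close as possible" step the quote describes.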