Apple Research unearthed a forgotten AI technique and is using it to generate images

https://9to5mac.com/2025/06/23/apple-ai-image-model-research-tarflow-starflow/
99•celias•3d ago

Comments

celias•3d ago
Paper at https://machinelearning.apple.com/research/normalizing-flows
rfv6723•4h ago
Apple AI team keeps going against the bitter lesson and focusing on small on-device models.

Let's see how this turns out in the long term.

echelon•4h ago
Edge compute would be clutch, but Apple feels a decade too early.
7speter•1h ago
Maybe for a big LLM, but if they add some GPU cores and an order of magnitude or two more unified memory to their iDevices, or shoehorn M-series SoCs into high-tier iDevices (especially as their lithography process advances), image generation becomes more viable, no? Also, I thought I read somewhere that Apple wanted to run inference for simpler queries locally and switch to datacenter inference when the request was more complicated.

If they approach things this way, and transistor progress continues linearly (relative to the last few years), maybe they can make their first devices that meet these goals in… 2-3 years?

sipjca•4h ago
somewhat hard to say how the cards will fall when the cost of 'intelligence' is coming down 1000x year over year while compute continues to scale. probably the bet should be made on both sides
furyofantares•3h ago
10x year over year, not 1000x, right? The 1000x is from this 10x observation having held for 3 years.
peepeepoopoo137•3h ago
"""The bitter lesson""" is how you get the current swath of massively unprofitable AI companies that are competing with each other over who can lose money faster.
furyofantares•3h ago
I can't tell if you're perpetuating the myth that these companies are losing money on their paid offerings, or just overestimating how much money they lose on their free offerings.
janalsncm•5m ago
If it costs you a billion dollars to train a GPT-5 and I can distill your model for a million dollars and get 90% of the performance, that's a terrible deal for you. Or, more realistically, for whoever you borrowed from.
janalsncm•9m ago
The bitter-er lesson is that distillation from bigger models works pretty damn well. It’s great news for the GPU poor, not great for the guys training the models we distill from.
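For reference, a minimal sketch of the standard soft-label distillation loss (PyTorch; the function name and temperature are illustrative, not from the thread):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Soften both distributions with temperature T and match them with
        # KL divergence; the T**2 factor keeps gradients on a stable scale.
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T**2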
tiahura•4h ago
https://github.com/bayesiains/nflows
imoverclocked•4h ago
It’s pretty great that despite having large data centers capable of doing this kind of computation, Apple continues to make things work locally. I think there is a lot of value in being able to hold the entirety of a product in hand.
xnx•3h ago
Google has a family of local models too! https://ai.google.dev/gemma/docs
coliveira•2h ago
It's very convenient for Apple to do this: less expense on costly AI chips, and more excuses to ask customers to buy their latest hardware.
nine_k•2h ago
Users have to pay for the compute somehow. Maybe by paying for models run in datacenters. Maybe paying for hardware that's capable enough to run models locally.
Bootvis•1h ago
I can upgrade to a bigger LLM I use through an API with one click. If it runs on my device, I need to buy a new phone.
nine_k•1h ago
I* can run the model on my device whether or not I have an internet connection, or permission from whoever controls the datacenter. I can run the model against highly private data while being certain that the data never leaves my device.

It's a different set of trade-offs.

* Theoretically; I don't own an iPhone.

lostlogin•1h ago
But also: if Apple's way works, it’s incredibly wasteful.

Server side means shared resources, shared upgrades and shared costs. The privacy aspect matters, but at what cost?

shakna•40m ago
Server side means an excuse not to improve model handling everywhere you can, and to increase global power usage by a noticeable percentage, at a time when we're approaching the "point of no return" for burning out the only planet we can live on.

The cost, so far, is greater.

nextaccountic•4h ago
This subject is fascinating and the article is informative, but I wish HN had a button like "flag", specifically for articles that seem written by AI (at least the section "How STARFlow compares with OpenAI's 4o image generator" sounds like it).
CharlesW•3h ago
FWIW, you can always report any HN quality concerns to hn@ycombinator.com and it'll be reviewed promptly and fairly (IMO).
Veen•2h ago
It reads like the work of a professional writer who uses a handful of variant sentence structures and conventions to quickly write an article. That’s what professional writers are trained to do.
janalsncm•12m ago
I had the opposite reaction: it definitely reads like a tech journalist who doesn't have a great understanding of the tech. AI would've written a less clunky (and possibly incorrect) explanation.
kelseyfrog•4h ago
Forgotten from like 2021? NVAE[1] was a great paper but maybe four years is long enough to be forgotten in the AI space? shrug

1. NVAE: A Deep Hierarchical Variational Autoencoder https://arxiv.org/pdf/2007.03898

bbminner•4h ago
Right, it is bizarre to read that someone "unearthed a forgotten AI technique" that you happened to have worked with/on when it was still hot. When did I become a fossil? :D

Also, if we're being nitpicky, diffusion model inference has been proven equivalent to (and is often used as) a particular NF, so… shrug

nabla9•2h ago
They are both variational inference, but a Normalizing Flow (NF) is not a VAE.
kelseyfrog•1h ago
If you read the paper, you'll find "More Expressive Approximate Posteriors with Normalizing Flows" is in the methods section. The authors are in fact using (inverse) normalizing flows within the context of VAEs.

The appendix goes on to explain, "We apply simple volume-preserving normalizing flows of the form z′ = z + b(z) to the samples generated by the encoder at each level".
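A volume-preserving flow of that form is essentially an additive coupling layer (as in NICE); a minimal sketch, assuming b reads only the first half of z so the map stays invertible with unit Jacobian determinant (illustrative code, not the NVAE implementation):

    import torch
    import torch.nn as nn

    class AdditiveCoupling(nn.Module):
        # z' = z + b(z), with b depending only on the first half of z,
        # so the transform is invertible and volume-preserving (|det J| = 1).
        def __init__(self, dim):
            super().__init__()
            half = dim // 2
            self.b = nn.Sequential(nn.Linear(half, half), nn.Tanh(),
                                   nn.Linear(half, half))

        def forward(self, z):
            z1, z2 = z.chunk(2, dim=-1)
            return torch.cat([z1, z2 + self.b(z1)], dim=-1)

        def inverse(self, z_out):
            z1, z2 = z_out.chunk(2, dim=-1)
            return torch.cat([z1, z2 - self.b(z1)], dim=-1)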

MBCook•3h ago
I wonder if it’s noticeably faster or slower than the common way on the same set of hardware.
b0a04gl•2h ago
flows make sense here not just for size but because they're fully invertible and deterministic. imagine running the same gen on 3 iPhones: same output. means apple can ensure the same input gives the same output across devices, chips, runs. no weird variance or sampling noise. good for caching, testing, user trust, all that. fits apple's whole determinism dna: predictable gen at scale
yorwba•1h ago
Normalizing flows generate samples by starting from Gaussian noise and passing it through a series of invertible transformations. Diffusion models generate samples by starting from Gaussian noise and running it through an inverse diffusion process.

To get deterministic results, you fix the seed for your pseudorandom number generator and make sure not to execute any operations that produce different results on different hardware. There's no difference between the approaches in that respect.
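To make the point concrete, a minimal sketch (PyTorch; function and variable names are illustrative): determinism comes from fixing the seed, not from choosing flows over diffusion.

    import torch

    def sample(model, seed, shape=(1, 64)):
        # A fixed seed fixes the Gaussian noise; any deterministic model
        # (flow or diffusion sampler) then maps it to the same output.
        gen = torch.Generator().manual_seed(seed)
        noise = torch.randn(shape, generator=gen)
        with torch.no_grad():
            return model(noise)

    # sample(m, 42) equals sample(m, 42) bit-for-bit on the same hardware,
    # as long as no nondeterministic ops are involved.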

bitpush•2h ago
I find it fascinating that Apple-centric media sites are stretching so much to position the company in the AI race. The title is meant to suggest that Apple found something unique that other people missed, when the simplest explanation is that they started working on this a while back (2021 paper, after all) and just released it.

A more accurate headline would be: Apple starts creating images using 4-year-old techniques.

danhau•1h ago
This "4 year old technique" apparently could give Apple an edge for on-device workloads.

> In short: both Apple and OpenAI are moving beyond diffusion, but while OpenAI is building for its data centers, Apple is clearly building for our pockets.

bitpush•1h ago
The same edge Apple had summarizing notifications so poorly that they had to turn it off?

https://arstechnica.com/apple/2024/11/apple-intelligence-not...

janalsncm•19m ago
That was a bad and unnecessary feature but the privacy benefits of running a model on device rather than in the cloud are undeniable.
politelemon•1h ago
> I find it fascinating that Apple-centric media sites are stretching so much to position the company in the AI race.

A glance through the comments shows HNers doing their best too. The mind still boggles as to why this site is so willing to perform mental gymnastics for a corporation.

rTX5CMRXIfFG•1h ago
That site's target market is what we know as "Apple fanboys". I'm not one to consider 9to5 serious journalism (nor even worthy of posting on HN), but even the publications I consider serious are businesses too, and need to pander to their markets in order to make money.

XSLT – Native, zero-config build system for the Web

https://github.com/pacocoursey/xslt
110•_kush•2h ago•49 comments

Biomolecular shifts occur in our 40s and 60s (2024)

https://med.stanford.edu/news/all-news/2024/08/massive-biomolecular-shifts-occur-in-our-40s-and-60s--stanford-m.html
124•fzliu•3h ago•43 comments

New EU rules on digital accessibility to come into force

https://www.rte.ie/news/technology/2025/0627/1520552-digital-accessibility/
32•austinallegro•59m ago•15 comments

AlphaGenome: AI for better understanding the genome

https://deepmind.google/discover/blog/alphagenome-ai-for-better-understanding-the-genome/
436•i_love_limes•17h ago•141 comments

A lumberjack created more than 200 sculptures in Wisconsin's Northwoods

https://www.smithsonianmag.com/travel/when-a-lumberjacks-imagination-ran-wild-he-created-more-than-200-sculptures-in-wisconsins-northwoods-180986840/
52•noleary•5h ago•20 comments

Launch HN: Issen (YC F24) – Personal AI language tutor

255•mariano54•17h ago•223 comments

Sailing the fjords like the Vikings yields unexpected insights

https://arstechnica.com/science/2025/06/this-archaeologist-built-a-replica-boat-to-sail-like-the-vikings/
37•pseudolus•3d ago•6 comments

“Why is the Rust compiler so slow?”

https://sharnoff.io/blog/why-rust-compiler-slow
165•Bogdanp•12h ago•190 comments

The time is right for a DOM templating API

https://justinfagnani.com/2025/06/26/the-time-is-right-for-a-dom-templating-api/
134•mdhb•12h ago•96 comments

Alternative Layout System

https://alternativelayoutsystem.com/scripts/#same-sizer
220•smartmic•12h ago•27 comments

Show HN: Sink – Sync any directory with any device on your local network

https://github.com/sirbread/sink
23•sirbread•1h ago•34 comments

Denmark to tackle deepfakes by giving people copyright to their own features

https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence
51•tfourb•3h ago•33 comments

Kea 3.0, our first LTS version

https://www.isc.org/blogs/kea-3-0/
82•conductor•11h ago•26 comments

How much slower is random access, really?

https://samestep.com/blog/random-access/
70•sestep•3d ago•33 comments

Starcloud can’t put a data centre in space at $8.2M in one Starship

https://angadh.com/space-data-centers-1
84•angadh•11h ago•121 comments

Collections: Nitpicking Gladiator's Iconic Opening Battle, Part I

https://acoup.blog/2025/06/06/collections-nitpicking-gladiators-iconic-opening-battle-part-i/
37•diodorus•3d ago•12 comments

Parameterized types in C using the new tag compatibility rule

https://nullprogram.com/blog/2025/06/26/
5•ingve•2h ago•0 comments

Bogong moths use a stellar compass for long-distance navigation at night

https://www.nature.com/articles/s41586-025-09135-3
17•Anon84•3d ago•1 comment

Show HN: Magnitude – Open-source AI browser automation framework

https://github.com/magnitudedev/magnitude
92•anerli•13h ago•35 comments

Snow - Classic Macintosh emulator

https://snowemu.com/
234•ColinWright•22h ago•79 comments

Apple Research unearthed a forgotten AI technique and is using it to generate images

https://9to5mac.com/2025/06/23/apple-ai-image-model-research-tarflow-starflow/
99•celias•3d ago•36 comments

Uv and Ray: Pain-Free Python Dependencies in Clusters

https://www.anyscale.com/blog/uv-ray-pain-free-python-dependencies-in-clusters
11•robertnishihara•1h ago•0 comments

Judge rejects Meta's claim that torrenting is “irrelevant” in AI copyright case

https://arstechnica.com/tech-policy/2025/06/judge-rejects-metas-claim-that-torrenting-is-irrelevant-in-ai-copyright-case/
44•Bluestein•4h ago•31 comments

Typr – TUI typing test with a word selection algorithm inspired by keybr

https://github.com/Sakura-sx/typr
70•Sakura-sx•3d ago•32 comments

A.I. Is Homogenizing Our Thoughts

https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts
185•thoughtpeddler•10h ago•57 comments

VA Tech scientists are building a better fog harp

https://arstechnica.com/science/2025/06/these-va-tech-scientists-are-building-a-better-fog-harp/
4•PaulHoule•3d ago•1 comment

SigNoz (YC W21, Open Source Datadog) Is Hiring DevRel Engineers (Remote)(US)

https://www.ycombinator.com/companies/signoz/jobs/cPaxcxt-devrel-engineer-remote-us-time-zones
1•pranay01•13h ago

A Review of Aerospike Nozzles: Current Trends in Aerospace Applications

https://www.mdpi.com/2226-4310/12/6/519
76•PaulHoule•16h ago•41 comments

Introducing Gemma 3n

https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/
342•bundie•14h ago•146 comments

'Peak flower power era': The story of first ever Glastonbury Festival in 1970

https://www.bbc.com/culture/article/20250620-the-story-of-the-first-ever-glastonbury-festival-in-1970
13•keepamovin•3d ago•2 comments