
FastVLM: Efficient Vision Encoding for Vision Language Models

https://machinelearning.apple.com/research/fast-vision-language-models
89•2bit•1d ago

Comments

meatmanek•1d ago
I guess this is the paper / announcement about https://github.com/apple/ml-fastvlm, which was previously discussed in https://news.ycombinator.com/item?id=44661527
yorwba•1d ago
I think you meant to link to https://news.ycombinator.com/item?id=43968897
meatmanek•20h ago
Oops, you are correct.
godelski•23h ago
Personally, I didn't find too much value in this paper. I think it is good as a product demonstration; I just don't know if it adds a ton of value to the research space (but maybe it does, because people have been making the same mistake for a while?).

I actually think the linked page makes it very easy to understand my main critique. The main problem here is that downscaling is a destructive process: it destroys information. Zoom in on that sign in the low-res image: can you read it?[0] No! But can you in the high res?[1] Of course!
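
To make this concrete, here's a minimal sketch of the experiment, using Pillow ("street_sign.png" is a placeholder file name, not an artifact from the paper):

  # Naive downscaling throws pixels away; no amount of training recovers them.
  from PIL import Image

  hi = Image.open("street_sign.png")            # e.g. a 4032x3024 photo
  lo = hi.resize((336, 336), Image.BILINEAR)    # a typical fixed VLM input size

  # Blow the small image back up for a side-by-side comparison with the
  # original. The sign's text is gone: those pixels no longer exist, so the
  # encoder can at best guess what the sign "should" say.
  lo.resize(hi.size, Image.NEAREST).save("what_the_encoder_sees.png")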

We can, of course, train the model on those signs alone and then get it to recognize what the sign should say, the same way you might (not by reading the words, but by recognizing the symbol). But we may run into problems when downsampling images, especially with the subtle biases that downsampling algorithms can create[2], which even includes tiling[3].
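
The bias point is easy to see too: different resize filters give measurably different images from the same source, which is roughly the kind of subtlety [2] is about. A rough sketch with Pillow and NumPy (placeholder file name again):

  # Different downsampling filters produce systematically different outputs
  # from the same source image; these are the subtle biases referred to above.
  import numpy as np
  from PIL import Image

  hi = Image.open("street_sign.png").convert("RGB")
  for name, f in [("nearest", Image.NEAREST),
                  ("bilinear", Image.BILINEAR),
                  ("lanczos", Image.LANCZOS)]:
      lo = np.asarray(hi.resize((336, 336), f), dtype=np.float32)
      print(name, lo.mean(), lo.std())   # small but systematic differences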

If the main thesis is "training on larger resolution results in better performance on high resolution images" then this seems to be a conclusion we already knew from a pure mathematical understanding of entropy, and is something many researchers have been discussing for decades.
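
The back-of-the-envelope version of the entropy argument, with illustrative numbers (not the paper's):

  # Crude upper bound on how much information an image can carry:
  # height * width * channels * bits-per-channel. Downscaling only removes bits.
  def max_bits(h, w, channels=3, bits=8):
      return h * w * channels * bits

  print(max_bits(336, 336) / 1e6)    # ~2.7 Mbit at a typical VLM input size
  print(max_bits(3024, 4032) / 1e6)  # ~292.6 Mbit for the original photo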

There are a lot of evaluations here, but it is not explicitly clear to me that the architecture is playing the main role. There is very little in the ablation study and a larger focus on dataset coverage. Table 1 is difficult to interpret. While I commend the fine-tuning of the ViT, it would not disentangle the entropy problem, as (IIRC) the ViT was pretrained on 224x224 images and then fine-tuned at a higher resolution; more fine-tuning isn't going to make that problem go away. Table 2 helps us understand pooling, but it does more in terms of dataset coverage than coverage of the solution space.

I don't think it is bad in the sense of "this is not a useful thing that was built," but rather "the way this is communicated makes it difficult for me, as a researcher, to interpret the reason for these results." In a way, my criticism here is much more general than just this paper: I am frustrated with the recent trend in AI research of putting more focus on dataset coverage than on interpretation, i.e. more in-depth ablations (e.g. holding variables constant, changing specific parameters for a test[4]).

There isn't infinite compute, so I'm not expecting the world. But in the trade-off between dataset coverage and more thorough ablations, I'd significantly prefer the latter. It is entirely possible that the architectural changes here are critical to the model's ability to properly encode the information. There are hints of it in the paper, but it is difficult to distinguish the architecture's contribution from the training procedure and from the entropy alone; there are many moving parts, and the information provided is not enough to tell them apart (or not to an acceptable threshold). I don't entirely blame researchers for their choice of trade-offs; we can't encourage more in-depth ablations until reviewers stop using "what about x dataset" as an excuse[5]. This paradigm of dataset coverage really feels like a lot of wasted compute. Honestly, I suspect we'd make far more progress if we changed paradigms, and that many of those improvements would come from much smaller labs without these large compute resources.

[0] Small Res: http://0x0.st/8nU3.png

[1] High Res: https://0x0.st/8nUE.png

[2] https://www.cs.cmu.edu/~clean-fid/

[3] https://arxiv.org/abs/2104.05704

[4] It would be nice to change one parameter at a time but sometimes things are coupled.

[5] "I'm curious about performance on x dataset because x dataset has y quality that I think is important" is a perfectly fine critique. But I rarely see that type of criticism in reviews. They include the demand but not the motivation for the demand. Just leads to noisy reviewing as an author can't infer if reviewer is asking because they're lazy or because they think lack of inclusion undermines the author's claims.

imtringued•16h ago
>If the main thesis is "training on larger resolution results in better performance on high resolution images" then this seems to be a conclusion we already knew from a pure mathematical understanding of entropy, and is something many researchers have been discussing for decades.

I think you missed the part where the word performance is doing double duty here. Performance as in accuracy of the result and performance as in the time it takes to achieve said result.

The expectation is that training on a larger resolution will worsen performance in the second sense. You also mentioned that downsampling images will destroy information, hence FastVLM should also perform worse in the first sense, since it is clearly running its transformer layers on downsampled images, with the patch embedding halving the image resolution at each layer.

To be fair, the presented network architecture does not really look like anything special. Three CNN layers with two transformer layers is just good product engineering. The real insight to be had here is that writing your own custom downsampling algorithm is a waste of time. You should make the downsampling learnable and part of the model.
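
Something in the spirit of this toy PyTorch sketch (made-up sizes, not the actual FastVLM architecture): the strided convolutions are the learned downsampler, and only the already-shrunk feature map reaches the transformer layers.

  import torch
  import torch.nn as nn

  class LearnedDownsampleEncoder(nn.Module):
      def __init__(self, dim=256, heads=4, depth=2):
          super().__init__()
          # Each strided conv halves the spatial resolution: a learned downsampler.
          self.stem = nn.Sequential(
              nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU(),
              nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.GELU(),
              nn.Conv2d(128, dim, 3, stride=2, padding=1),
          )
          layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
          self.blocks = nn.TransformerEncoder(layer, depth)

      def forward(self, x):                       # x: (B, 3, H, W)
          x = self.stem(x)                        # (B, dim, H/8, W/8)
          tokens = x.flatten(2).transpose(1, 2)   # (B, H/8 * W/8, dim)
          return self.blocks(tokens)

  enc = LearnedDownsampleEncoder()
  print(enc(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 1024, 256])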

godelski•6h ago
Sorry, I should clarify

  > The expectation is that training on a larger resolution will worsen performance in the second sense.

  > downsampling images will destroy information, hence FastVLM should also perform worse in the first sense
I do not think these are in contention. By training on larger images, our embedding subnetwork can better learn to embed the requisite information. It need not hurt performance in the sense of inference speed. It would mean worse inference speed if everything were held equal or if we just naively scaled, but it can actually be better if the learned algorithm is more efficient at extracting information, and it has the advantage of access to more information: the higher-resolution photo simply contains more. On the other hand, if you train a model for a different downsampling task, that information may not transfer well to the new downsampling task, which makes fine-tuning tricky and insufficient for a hard conclusion.
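
To put rough numbers on it (illustrative, not from the paper): a learned conv stem still sees every pixel of the large image, but it hands far fewer tokens to the expensive attention layers.

  # Token counts for a ViT-style 16x16 patchify applied directly at high
  # resolution vs. after a learned 4x downsampling stem. Self-attention cost
  # scales roughly with the square of the token count.
  def tokens(h, w, patch=16):
      return (h // patch) * (w // patch)

  naive = tokens(1024, 1024)               # 4096 tokens straight from pixels
  learned = tokens(1024 // 4, 1024 // 4)   # 256 tokens after a 4x stem
  print(naive, learned, (naive / learned) ** 2)   # 4096 256 256.0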

Note that their model is smaller. That actually can give us good analysis opportunities, as this suggests what I'm implying: more efficient embedding.

  > Three CNN layers with two transformer layers is just good product engineering. The real insight to be had here is that writing your own custom downsampling algorithm is a waste of time. You should make the downsampling learnable and part of the model.
Actually, that's the reason I linked [3]: it reminded me of that paper. They used an overlapping (convolutional) patch-and-embed method in the ViT model, as opposed to the standard hard partitioning. Which, in effect, is the same conclusion: learn your downsampler (embedder).
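
For anyone who hasn't read [3], the difference is roughly this (toy PyTorch sketch with my own sizes, not theirs):

  import torch
  import torch.nn as nn

  x = torch.randn(1, 3, 224, 224)

  # Standard ViT patch embedding: kernel == stride, so patches never overlap.
  hard = nn.Conv2d(3, 384, kernel_size=16, stride=16)
  print(hard(x).shape)      # (1, 384, 14, 14) -> 196 tokens

  # Overlapping convolutional tokenizer: kernel > stride, so each token sees
  # its neighbours, i.e. the downsampler/embedder itself is learned and smooth.
  overlap = nn.Conv2d(3, 384, kernel_size=7, stride=4, padding=3)
  print(overlap(x).shape)   # (1, 384, 56, 56) -> 3136 tokens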

I think we're pretty much in agreement. I just really want to see more ablations.

Analyzing All of the Words Found on NYC Streets

https://pudding.cool/2025/07/street-view/
1•colinprince•37s ago•0 comments

The post I hoped to avoid. The end of C&H Subreddit

https://old.reddit.com/r/calvinandhobbes/comments/1m8j09a/the_post_i_hoped_to_avoid_the_end_of_ch_subreddit/
1•tarsius•2m ago•0 comments

Shared Kitchens Are Still Rising: Our Legacy with RFBC and What's Next

https://www.thefoodcorridor.com/blog/rfbc-whats-next/
1•mooreds•5m ago•0 comments

Spotify shares pop 13% after company reports first profitable year

https://www.cnbc.com/2025/02/04/spotify-shares-pop-10percent-after-company-reports-first-profitable-year.html
1•wslh•5m ago•0 comments

Opal: Build, edit and share mini-AI apps using natural language

https://opal.withgoogle.com/
1•mooreds•5m ago•0 comments

Paket Wisata Dieng

https://dieng.miraheze.org/wiki/Paket_Wisata_Dieng
1•imaade•6m ago•2 comments

Canada's oil sands transformed into one of North America's lowest-cost plays

https://www.reuters.com/business/energy/how-canadas-oil-sands-transformed-into-one-north-americas-lowest-cost-plays-2025-07-16/
1•rguiscard•8m ago•0 comments

Alto turns your Apple Notes into a website

https://alto.so/
1•colinprince•9m ago•0 comments

Show HN: Is Anthropic Down?

https://isanthropicdown.com/
2•ymichael•10m ago•0 comments

Study links caffeine intake to decreased antibiotic potency in common bacteria

https://phys.org/news/2025-07-links-caffeine-intake-decreased-antibiotic.html
1•Jimmc414•10m ago•2 comments

Sigsegv as control flow – How the JVM optimizes your null checks (2015)

https://jcdav.is/2015/10/06/SIGSEGV-as-control-flow/
1•zbentley•11m ago•0 comments

How big can I print my image?

https://maurycyz.com/misc/printing/
1•LorenDB•12m ago•0 comments

I stumbled into the effective oxygen percentage by looking at model errors

https://getfast.ai/blogs/altitude-correction
1•tmulc18•14m ago•0 comments

Beyond RRF: Improving Hybrid Search by Up to 7.8%

https://www.topk.io/blog/20250724-beyond-rff-how-topk-improves-hybrid-search-quality?trk=feed_main-feed-card_reshare_feed-article-content
1•gk1•16m ago•0 comments

The Saga of Multicore OCaml [video]

https://www.youtube.com/watch?v=XGGSPpk1IB0
1•Shoop•17m ago•0 comments

Ask HN: Are You Happy?

5•chistev•24m ago•3 comments

Renting Is for Suckers

https://andrewkelley.me/post/renting-is-for-suckers.html
3•Bogdanp•26m ago•1 comments

TaxCalcBench: Can AI file your taxes? (not yet)

https://arxiv.org/abs/2507.16126
1•michaelrbock•26m ago•0 comments

Building a Blog with Mocha

https://buildingwith.mocha.app/blog
1•bluesnowmonkey•32m ago•0 comments

InstructVLA: Vision-Language-Action Instruction Tuning

https://yangs03.github.io/InstructVLA_Home/
1•chrsw•39m ago•0 comments

How to Surf the Web in 2025, and Why You Should

https://www.raptitude.com/2025/06/how-to-surf-the-web-in-2025-and-why-you-should/
1•imgabe•39m ago•0 comments

Structllm – structured output support to any LLM provider

https://github.com/piotrmaciejbednarski/structllm
1•piotrbednarski•44m ago•1 comments

Public payment infrastructures: Lessons from Brazil's Pix (2022) [pdf]

https://www.bis.org/publ/bisbull52.pdf
2•felineflock•56m ago•0 comments

I'm Creating a Programming Language

https://github.com/kvthweatt/FluxLang
1•kvthweatt•56m ago•0 comments

EPA rescinds $20M for clean water in pesticide-contaminated rural California

https://www.theguardian.com/us-news/2025/jul/24/water-pesticide-polution-california-trump
4•tzs•57m ago•0 comments

Mobile Bess Powers Remote Heavy Equipment

https://spectrum.ieee.org/mobile-bess
2•defrost•58m ago•0 comments

Leah Remini: Leaked Scientology policies direct lawyers in religious warfare

https://tonyortega.substack.com/p/leah-remini-leaked-scientology-policies
4•PaulHoule•1h ago•0 comments

Google and Microsoft Trusted Them. 2.3M Users Installed Them. They Were Malware

https://blog.koi.security/google-and-microsoft-trusted-them-2-3-million-users-installed-them-they-were-malware-fb4ed4f40ff5
4•drabbiticus•1h ago•2 comments

Scientists are developing artificial blood that could save lives in emergencies

https://www.npr.org/sections/shots-health-news/2025/07/24/nx-s1-5477632/artificial-blood-hemorrhage-emergency-medicine
1•tagawa•1h ago•0 comments

Transhumanism Should Focus on Inequality, Not Living Forever

https://undark.org/2025/07/23/opinion-transhumanism-inequality/
2•EA-3167•1h ago•0 comments