frontpage.

Firefox Moves to GitHub

https://github.com/mozilla-firefox/firefox
207•thefilmore•1h ago•114 comments

Fastvlm: Efficient vision encoding for vision language models

https://github.com/apple/ml-fastvlm
218•nhod•6h ago•38 comments

Persuasion methods for engineering managers

https://newsletter.manager.dev/p/5-powerful-persuasion-methods-for
18•Liriel•43m ago•2 comments

Open Hardware Ethernet Switch project, part 1

https://serd.es/2025/05/08/Switch-project-pt1.html
106•luu•3d ago•14 comments

TransMLA: Multi-head latent attention is all you need

https://arxiv.org/abs/2502.07864
43•ocean_moist•3h ago•4 comments

Air Traffic Control

https://computer.rip/2025-05-11-air-traffic-control.html
145•1317•1d ago•41 comments

15 Years of Shader Minification

https://www.ctrl-alt-test.fr/2025/15-years-of-shader-minification/
53•laurentlb•2d ago•8 comments

The Barbican

https://arslan.io/2025/05/12/barbican-estate/
495•farslan•15h ago•171 comments

A conversation about AI for science with Jason Pruet

https://www.lanl.gov/media/publications/1663/0125-qa-jason-pruet
144•LAsteNERD•11h ago•116 comments

Can you trust that permission pop-up on macOS?

https://wts.dev/posts/tcc-who/
257•nmgycombinator•12h ago•180 comments

Revisiting Image Maps

https://css-tricks.com/revisiting-image-maps/
11•thm•3d ago•4 comments

RIP Usenix ATC

https://bcantrill.dtrace.org/2025/05/11/rip-usenix-atc/
163•joecobb•14h ago•34 comments

Understanding LucasArts' iMUSE System

https://github.com/meshula/LabMidi/blob/main/LabMuse/imuse-technical.md
105•todsacerdoti•8h ago•21 comments

NASA study reveals Venus crust surprise

https://science.nasa.gov/science-research/astromaterials/nasa-study-reveals-venus-crust-surprise/
69•mnem•3d ago•75 comments

HealthBench – An evaluation for AI systems and human health

https://openai.com/index/healthbench/
142•mfiguiere•13h ago•123 comments

A community-led fork of Organic Maps

https://www.comaps.app/news/2025-05-12/3/
295•maelito•19h ago•192 comments

Launch HN: ParaQuery (YC X25) – GPU Accelerated Spark/SQL

113•winwang•15h ago•70 comments

Alephic Writing Style Guide

https://www.alephic.com/company/writing
11•otoolep•3d ago•2 comments

University of Texas-led team solves a big problem for fusion energy

https://news.utexas.edu/2025/05/05/university-of-texas-led-team-solves-a-big-problem-for-fusion-energy/
238•signa11•18h ago•162 comments

Reviving a modular cargo bike design from the 1930s

https://www.core77.com/posts/136773/Reviving-a-Modular-Cargo-Bike-Design-from-the-1930s
169•surprisetalk•16h ago•133 comments

Ruby 3.5 Feature: Namespace on read

https://bugs.ruby-lang.org/issues/21311
197•ksec•17h ago•96 comments

Wtfis: Passive hostname, domain and IP lookup tool for non-robots

https://github.com/pirxthepilot/wtfis
85•todsacerdoti•9h ago•4 comments

Writing N-body gravity simulations code in Python

https://alvinng4.github.io/grav_sim/5_steps_to_n_body_simulation/
105•dargscisyhp•2d ago•21 comments

Policy of Transience

https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/transience/
26•pekim•2d ago•1 comment

The Beam

https://www.erlang-solutions.com/blog/the-beam-erlangs-virtual-machine/
64•Alupis•3d ago•12 comments

Demonstrably Secure Software Supply Chains with Nix

https://nixcademy.com/posts/secure-supply-chain-with-nix/
103•todsacerdoti•16h ago•60 comments

Show HN: Lumoar – Free SOC 2 tool for SaaS startups

https://www.lumoar.com
76•asdxrfx•12h ago•28 comments

Universe expected to decay in 10⁷⁸ years, much sooner than previously thought

https://phys.org/news/2025-05-universe-decay-years-sooner-previously.html
201•pseudolus•21h ago•261 comments

FedRAMP 20x – One Month in and Moving Fast

https://www.fedramp.gov/2025-04-24-fedramp-20x-one-month-in-and-moving-fast/
79•transpute•6h ago•51 comments

Continuous glucose monitors reveal variable glucose responses to the same meals

https://examine.com/research-feed/study/1jjKq1/
176•Matrixik•3d ago•100 comments

Fastvlm: Efficient vision encoding for vision language models

https://github.com/apple/ml-fastvlm
217•nhod•6h ago

Comments

BryanLegend•5h ago
Seems like the main thing holding these new minds back is being able to see well. Breakthroughs like this will fix that.
efnx•5h ago
That and the ability to hold on to knowledge.
static_void•2h ago
... or say they don't know.
kamranjon•5h ago
Apple out here playing 5D chess, installing neural cores in their hardware and writing crazy efficient vision models to run on 'em. Cool stuff.
wmf•4h ago
I thought they turned sycophancy off...
kamranjon•4h ago
Aw, yes, I admit, I think the new Apple hardware is real cool
vFunct•5h ago
Can it fill a wine glass to the rim?
mkl•4h ago
It's for interpreting images, not generating them.
turnsout•4h ago
Apple has gotten a slow start in the LLM world, but they have the only long term strategy that makes sense. They’re going to dominate the 2030s.
boroboro4•4h ago
What exactly the strategy is?
generalizations•4h ago
They can run locally on-device: a win for cost, latency and privacy (privacy is pragmatic: it means you can use all the user's data as context without qualms). There's a reason Microsoft tried so hard to push for the neural processors a year or two ago. Avoiding the cost of the datacenter while offering good-enough inference (emphasis on good) is a massive win.
turnsout•4h ago
Yes, thank you; this is the strategy I was referring to. It will take some time for the models and chips to get there, but on-device inference will have massive advantages for privacy, speed and cost. Plus it will drive demand for hardware—at first, iPhones, but soon AirPods and glasses.
xnx•4h ago
Google already has some of the best on device models (Gemma) and chips (Tensor).
AceJohnny2•2h ago
> and chips (Tensor)

Is there actually any hard data out there comparing the NPU on the Google Tensor G4 vs the Apple A18? I wasn't able to quickly find anything concrete.

I mean, Apple has been shipping mobile NPUs for longer than Google (Apple: since the A11 in 2017, Google: since 2021), and Apple's chips are (ostensibly) built on a smaller silicon node than Google's (G4: Samsung SF4P vs A18: TSMC N3E). However, the G4 appears to have more RAM bandwidth (68.26 GB/s vs 60 GB/s on the A18).

lern_too_spel•1h ago
Google has been shipping custom NPUs since the Pixel 4 in 2019. Prior to that, Google phones just used off the shelf SOCs from Qualcomm, with 2018's Pixel 3 using the NPU in the Snapdragon 845. Android first shipped NNAPI in Android 8.1 in 2017, with acceleration on various mobile GPUs and DSPs, including the Pixel Visual Core on the Pixel 2. Google has shipped more on-device models so far, but neither company has a moat for on-device inference.

https://blog.google/products/pixel/pixel-visual-core-image-p...

weikju•4h ago
They are running data centers and offloading some things to ChatGPT though, not just running on-device.

In fact there’s no clear indication when Apple Intelligence is running on-device or in their Private Cloud Compute.

jfarina•4h ago
What strategy is that?
ryanmcgarvey•4h ago
I presume they mean that distribution is king and they make all the devices.
insane_dreamer•4h ago
As the father of a young child whose optic nerves are highly deteriorated (compression) and is expected to lose his sight (when exactly is unknown; based on original projections he should be blind by now, but an experimental treatment run in a trial at the NIH (KEEP FUNDING SCIENCE) has stabilized his sight), I'm overjoyed with the advances being made in VLMs. I can now envision a future where even if he loses his sight he'll be able to interact with the world around him, go to college, have a fulfilling career (he loves science and engineering, and is talented for his young age), etc.
lynx97•2h ago
I grew up in the 80s as a 100% blind child. Technology was by far not as advanced as today. Computers were just coming up when I was around 12. I learnt to type on an old-school typewriter, and I also learnt to write braille with a pretty heavy full-metal embossing device. OCR was still quite bad. When I switched to what you call high school, I used a laptop with an integrated Braille display to follow classes. Used good old DOS as OS and Word 5.5 as my "notepad". Except for PC Lingua for Latin, I basically had no tools specialized for learning. An electronic notepad and my brain was all I had to follow school. And I still made it. I have a great job I love, my own apartment, a sweet girlfriend, and I am basically completely independent. To a point where I had to forcefully send away my mother, since her continued attempts to "help" me were basically detrimental to my own development. I can not emphasize enough how important it is how you deal with this as a parent. Since parents are indeed the biggest hindrance to development, we have a saying around here amongst disabled people: "additional disability due to parental overprotection" (Zusatzbehinderung Eltern). Please take a moment to understand what this means, without feeling personally attacked. It's important. Your child can leave home around 18, just like every other kid. I did. Don't slow that process down artificially. The more this is prolonged, the harder it gets for the individual to actually obtain independence.

I am telling you this because I read between the lines that you believe current technology is a reason for you to be hopeful. Sure, it should be. But never forget, your child can do much more than you as a sighted person will ever be able to understand. Don't let them drown in your own misery. Let them discover what they can do. You will be surprised what they come up with. And don't fall for Gear Acquisition Syndrome. Sure, tools are nice, and they do get better, which is also nice. I LOVE vision models, to stay on topic somehow. However, I still leave my house with only a cane and my phone in my pocket. I do occasionally ask Siri "Where am I?" to get an address if I happen to have forgotten where exactly I am at the moment. But at the end of the day, my cane is what shows me the way. Most tech is hype; plain old hearing and your sense of touch get you much farther than you might think.

Wish you all the best for your own journey, and the development of your child.

wiz21c•1h ago
I should read a comment like yours every morning.
topato•1h ago
Wow, this really adds an amazing perspective to the entire (frequently touted) concept of vision language models somehow "saving" blind people from their old life; in the past, a blind person desperately needed caretakers, otherwise they would bumble around their home, end up mistaking the sink for the toilet, accidentally turn on the stove thinking it's the thermostat, until they died after mistaking bleach for milk and cat litter for cereal....

BUT NOW... THE FUTURE IS HERE.... an all-knowing god-like cell phone can tell these poor miserable individuals what the objects in their own homes are! No more tragic Mr. Magoo-ian accidents!

But thank you for posting this; it certainly enlightened me! I'll admit, all these AI solutions

liamwire•4h ago
It feels like this is the level of speed-up in time-to-first-token needed to make continuous vision useful for on-device applications like an assistant that can see and take action on your screen, à la the original Apple Intelligence demos. It's very impressive seeing the app in the repo, and I'm excited to build it tonight and play around.
nine_k•4h ago
With that, a really helpful aid for blind people can be made, running just on their phone, fed from a camera in their eyeglasses. Somebody who could not move around without an assistant could become autonomous in daily life.
adamsiem•4h ago
Anyone using vision to parse screenshots? QVQ was too slow. Will give this a shot.
abrichr•3h ago
You might be interested in https://github.com/OpenAdaptAI/OpenAdapt
logankeenan•3h ago
I used Molmo to parse screenshots in order to detect the locations of UI elements. See the repo below, plus a rough sketch of the general pattern after the links. I think OmniParser from Microsoft would also work well.

https://github.com/logankeenan/george

https://github.com/microsoft/OmniParser
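
For anyone who wants to try this pattern, here is a minimal sketch of the general approach: prompt a VLM for the pixel coordinates of a UI element in a screenshot, then parse the coordinates out of its reply. It uses llava-1.5 via Hugging Face transformers rather than Molmo or OmniParser, and the model ID, prompt wording, and coordinate format are illustrative assumptions, not taken from the linked repos.

    # Rough sketch: ask a VLM where a UI element is in a screenshot, then parse
    # the reply. Model ID, prompt, and output format are illustrative only.
    import re
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "llava-hf/llava-1.5-7b-hf"
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    image = Image.open("screenshot.png")  # hypothetical input
    prompt = (
        "USER: <image>\nGive the (x, y) pixel coordinates of the 'Submit' button. "
        "ASSISTANT:"
    )

    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        model.device, torch.float16
    )
    output = model.generate(**inputs, max_new_tokens=64)
    reply = processor.decode(output[0], skip_special_tokens=True)

    # Pull an "(x, y)" pair out of the free-text reply, if the model produced one.
    match = re.search(r"\((\d+)\s*,\s*(\d+)\)", reply)
    if match:
        x, y = map(int, match.groups())
        print(f"Model points at ({x}, {y})")
    else:
        print("No coordinates found in reply:", reply)

Pointing-oriented models such as Molmo return coordinates more reliably than a generic chat VLM, so in practice the parsing step gets adapted to the specific model's output format.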

nprateem•3h ago
OMG Apple finally managed to hire an AI researcher.
Aeroi•3h ago
I'm building a realtime voice+vision app called Sen; it's currently live in beta and streams frames over WebRTC. It's fast and smart, but I'm super curious to see how these models do as we get closer to the metal. I can see these running on-device in the future with super fast TTFB.
keyle•3h ago
Do you have a write up of the tech stack and setup? Or willing to give the gist here?

I'd like to make a private Qwen or similar for my kids to prompt with a button and voice control. It doesn't need vision... Although eventually that'd be very cool.

Siri just sucks.

We might not be there yet...

Aeroi•3h ago
Yeah, I made a post on here, but the algo sent it to the gulag abyss.

https://news.ycombinator.com/item?id=43926673

keyle•2h ago
That's a good product site, but it doesn't help me in any way...
Aeroi•2h ago
I also ran across an interesting robot toy demo today that had voice built in. It was whimsical and seemed like it was aimed at primary education and kids. Someone here might know the name.
nikolayasdf123•3h ago
2GB for the smallest 0.5B model. It does not make sense for each app to download this; Apple must have plans to pre-load these models at the OS level and expose an SDK for all apps to call them locally. Exciting times!

Opened an issue for them to confirm this: https://github.com/apple/ml-fastvlm/issues/7

nikolayasdf123•3h ago
Google and the cloud LLM providers must be gnashing their teeth now! Haha.
nikolayasdf123•3h ago
Distributing this heavy compute and moving it close to the device, where (1) the source data originates and (2) decisions and outputs based on the analysis are made, is the way to go: super low latency, no network traffic, privacy, less overhead in the cloud. This is amazing.
porphyra•2h ago
It seems that the future of robotics is VLA models. Even Tesla FSD is an end-to-end VLA model. Efficient vision encoding will be a huge part of making robots safe and responsive.
lynx97•2h ago
I wonder, can I convert/run this with llama.cpp? Its being LLaVA-based seems promising.
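
The repo does not document a GGUF export, so this is only a speculative sketch: assuming FastVLM's LLaVA-style split into a language model plus a vision projector could be exported to GGUF files (the model.gguf and mmproj.gguf names below are hypothetical), the closest existing flow is llama.cpp's LLaVA support, shown here via llama-cpp-python.

    # Speculative sketch only: assumes FastVLM could be exported to GGUF
    # (model.gguf / mmproj.gguf), which is NOT confirmed anywhere in the repo.
    # Uses llama-cpp-python's LLaVA 1.5 chat handler as the closest existing flow.
    import base64
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    def image_to_data_uri(path: str) -> str:
        # llama-cpp-python expects image URLs, so base64-encode the local file.
        with open(path, "rb") as f:
            return "data:image/png;base64," + base64.b64encode(f.read()).decode()

    chat_handler = Llava15ChatHandler(clip_model_path="mmproj.gguf")  # hypothetical
    llm = Llama(
        model_path="model.gguf",  # hypothetical
        chat_handler=chat_handler,
        n_ctx=4096,  # leave room for image tokens plus the reply
    )

    result = llm.create_chat_completion(
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": image_to_data_uri("photo.png")}},
                    {"type": "text", "text": "Describe this image in one sentence."},
                ],
            }
        ]
    )
    print(result["choices"][0]["message"]["content"])

Until someone writes a conversion script that understands FastVLM's vision encoder, this remains untested; the sketch only shows where such a conversion would plug in.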