frontpage.

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
1•Anon84•25s ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•1m ago•0 comments

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•3m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
1•Willingham•10m ago•0 comments

The Big Hunger by Walter M. Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•11m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•16m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
2•mooreds•16m ago•1 comment

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•18m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•19m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•23m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•25m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•25m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•26m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•28m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•28m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•29m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•29m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•34m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
4•dragandj•36m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•37m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•38m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•39m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•39m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•41m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•42m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•42m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•43m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•44m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•46m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•46m ago•0 comments

Moondream 3 Preview: Frontier-level reasoning at a blazing speed

https://moondream.ai/blog/moondream-3-preview
286•kristianp•4mo ago

Comments

Aeolun•4mo ago
That’s actually kinda impressive for an 8b model. Normally my experience with them is that they’re not really useful.
conwayanderson•4mo ago
Only 2b active also - very fast
lawlessone•4mo ago
Can run it on a phone then?

Seems like it could be somewhat useful for people with poor eyesight or blindness

conwayanderson•4mo ago
In terms of size yes, but I think it needs some work to get the model in the right format

A couple of people got it running on a Raspberry Pi though

apwell23•4mo ago
sorry what does it mean for only 2b to be active?
simonw•4mo ago
My understanding is that, while all 8B are loaded into memory, for each token inference step only 2B are selected and used - so tokens are produced faster because there is less computation needed.

Hoping someone will correct me if that's not the right mental model!
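
That mental model can be sketched in a few lines. This is a toy illustration of top-k mixture-of-experts routing in general, not Moondream's actual implementation; all the sizes and weights below are made up:

```python
# Toy MoE layer: every expert's weights stay "loaded", but the router picks
# only the top-k experts to actually run for each token, so per-token compute
# is roughly k/n of a dense model with the same total parameter count.
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # all kept in memory (the "8B loaded" side of the analogy)
TOP_K = 2         # actually executed per token (the "2B active" side)
DIM = 4

# Each "expert" is a toy linear layer: a DIM x DIM weight matrix.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
# Router: maps the token vector to one score per expert.
router = [[random.gauss(0, 1) for _ in range(NUM_EXPERTS)] for _ in range(DIM)]

def moe_forward(x):
    # Score every expert, but only run the top-k highest scoring ones.
    scores = [sum(x[i] * router[i][e] for i in range(DIM))
              for e in range(NUM_EXPERTS)]
    top = sorted(range(NUM_EXPERTS), key=lambda e: scores[e])[-TOP_K:]
    # Softmax over the chosen experts to get mixing weights.
    z = [math.exp(scores[e]) for e in top]
    gate = [v / sum(z) for v in z]
    out = [0.0] * DIM
    for g, e in zip(gate, top):
        for i in range(DIM):
            out[i] += g * sum(experts[e][i][j] * x[j] for j in range(DIM))
    return out

out = moe_forward([random.gauss(0, 1) for _ in range(DIM)])
print(len(out), "dims; ran", TOP_K, "of", NUM_EXPERTS, "experts")
```

Only 2 of the 8 expert matrices are multiplied per token, which is where the speedup comes from; memory use is unchanged because all 8 must stay resident for the router to choose among them.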

derac•4mo ago
I tried it out on their website and it seems pretty legit; it gets stuff wrong, but so do all the vision models in my experience
conwayanderson•4mo ago
Especially good at detection & pointing cases, particularly since the bigger models aren't good at localization
scoots_k•4mo ago
Moondream 2 has been very useful for me: I've been using it to automatically label object detection datasets for novel classes and distill an orders-of-magnitude smaller but similarly accurate CNN.

One oddity is that I haven't seen the claimed improvements beyond the 2025-01-09 tag - subsequent releases improve recall but degrade precision pretty significantly. It'd be amazing if object detection VLMs like this reported class confidences to better address this issue. That said, having a dedicated object detection API is very nice and absent from other models/wrappers AFAIK.

Looking forward to Moondream 3 post-inference optimizations. Congrats to the team. The founder Vik is a great follow on X if that's your thing.

conwayanderson•4mo ago
Also used it for auto-labeling - it's crazy good for that
radq•4mo ago
Thanks! If you could shoot me a note at vik@m87.ai with any examples of the precision/recall issues you saw I'd appreciate it a ton.
scoots_k•4mo ago
Will do!
nstj•4mo ago
Wonderful to see "at the coalface" collaboration happen on this stuff at HN. More than just a newsfeed!
buyucu•4mo ago
are you planning to release a GGUF?
sheepscreek•4mo ago
Impressive stuff! Has anyone tried it for computer/browser control? How does it fare with graphs and charts?
radq•4mo ago
The 'point' skill is trained on a ton of UI data; we've heard of a lot of people using it in combination with a bigger driver model for UI automation. We are also planning on post-training it to work end-to-end for this in an agentic setting before the final release -- this was one of the main reasons we increased the model's context length.

Re: chart understanding, there are a lot of different types of charts out there but it does fairly well! We posted benchmarks for ChartQA in the blog but it's on par with GPT5* and slightly better than Gemini 2.5 Flash.

* To be fair to GPT5, it's going to work well on many more types of charts/graphs than Moondream. To be fair to Moondream, GPT5 isn't really well suited to deploy in a lot of vision AI applications due to cost/latency.

bobdyl87•4mo ago
I'm labeling a dataset with it. We'll see how it turns out
bobdyl87•4mo ago
Pretty good so far. Have 100,000 detections
stephenbuilds•4mo ago
Using moondream2 at paper.design to describe user uploaded images (for automatic labels in the layer tree). It's incredible, super fast and accurate. Excited to try out 3 :)
robertdaniels•4mo ago
Its ability to process large volumes of images with low active parameters makes it a significant advancement for edge devices. However, scaling these models to production environments often introduces security challenges, including bot floods targeting inference APIs and adversarial inputs that mimic legitimate queries to disrupt detections.
Onavo•4mo ago
How does it perform against the new Qwen3-VL model?
kache_•4mo ago
it's honestly really good. Big fan of that team, they are really practical and have been producing really useful software and sharing all their learnings online.
buyucu•4mo ago
Is there a GGUF?
liqilin1567•4mo ago
Tried its detection out on the playground; as a 9B model it's pretty good.
Imanari•4mo ago
So... it should be really good at ARC?
pzo•4mo ago
Would be interesting to see how it scores on COCO or Object356 dataset object detection (even if I know will be slower than dedicated object detection model)
bluelightning2k•4mo ago
Spent 5 minutes trying to get basic pricing info for Moondream cloud. Seems it simply does not exist (or at least not until you've actually signed up?). There's 5,000 free requests but I need to sense-check the pricing as viable as step 0 of evaluating - long before hooking it up to an app.
civilchaos•4mo ago
We are looking to launch our cloud very soon. We are still optimizing our inference to get you the best pricing we can offer. Follow @moondreamai on X if you want your ear to the ground for our launch!
aitchnyu•4mo ago
Will you add this to OpenRouter too?
nicohayes•4mo ago
The MoE architecture choice here is particularly interesting - the ability to keep only 2B parameters active while maintaining 8B model performance is a game-changer for edge deployment. I've been deploying vision models in production environments where latency is critical, and this sparse activation approach could solve the inference cost problem that's been limiting adoption of larger VLMs. The chart understanding capabilities mentioned look promising for automated document analysis workflows. Has anyone tested the model's consistency across different image qualities or lighting conditions? That's often where smaller models struggle compared to frontier ones.
simonw•4mo ago
This looks amazing. I'm a big fan of Gemini for bounding box operations, the idea that a 9B model could outperform it is incredibly exciting!

I noticed that Moondream 2 was Apache 2 licensed but the 3 preview is currently BSL ("You can’t (without a deal): offer the model’s functionality to anyone outside your organization—e.g., an external API, or managed hosting for customers") - is that a permanent change to your licensing policies?

simonw•4mo ago
I just noticed in https://huggingface.co/moondream/moondream3-preview/blob/mai... that the license is set to change to Apache 2 after two years.
nicohayes•4mo ago
Could you clarify whether the 2B active parameter concept refers to per-token inference and how this scales with context length? Specifically how MoE affects activation during inference and any practical implications for latency.
ZeroCool2u•4mo ago
Really impressive performance from the Moondream model, but looking at the results from the big 3 labs, it's absolutely wild how poorly Claude and OpenAI perform. Gemini isn't as good as Moondream, but it's clearly the only one that's even halfway decent at these vision tasks. I didn't realize how big a performance gap there was.
ekidd•4mo ago
Gemini is really fantastic at anything that's OCR-adjacent, and it promptly falls over on most other image-related tasks.
Jackson__•4mo ago
Funnily enough, Gemini is also the only one able to read a D20. ChatGPT consistently gets it wrong, and Claude mostly argues it can't read the face of the die that's facing up because it's obstructed (it's not lol).
KronisLV•4mo ago
I'm not sure why they haven't been acquired yet by any of the big ones, since clearly Moondream is pretty good! Definitely seems like something Anthropic/OpenAI/whoever would want to fold into their platforms and such. Everyone involved in creating it should probably be swimming in money and visual use cases for LLMs should become far less useless with the reach of the big orgs.
thw_9a83c•4mo ago
Can anyone suggest what's the cheapest hardware to run this model locally with a reasonable performance?
daemonologist•4mo ago
Since there's no quantized version available at the moment, you'll need ~20 GB of memory for the weights plus some extra for the KV cache. CPU with 32 GB RAM will be the cheapest and still reasonably fast given the relatively small number of activated parameters.
thw_9a83c•4mo ago
Thank you!

I don't even know what a "quantized version" is, but I was expecting answers about NVIDIA graphics cards and their memory. My computer has 24GB of memory, but I'll go for 64GB to run this locally on a new computer.

tensorlibb•4mo ago
Any examples of face tracking a video and exporting mask coordinates to apply a filter in ffmpeg
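
One way to sketch the ffmpeg half of that pipeline, assuming the face boxes already come from some detector (the detector call itself is out of scope here; the filtergraph pattern below is standard ffmpeg, and the box coordinates are made up for illustration):

```python
# Build an ffmpeg filter_complex string that blurs one rectangular region:
# split the video, crop the region, blur it, and overlay it back in place.
def blur_region_filter(x, y, w, h, strength=10):
    return (
        f"[0:v]split[base][roi];"
        f"[roi]crop={w}:{h}:{x}:{y},boxblur={strength}[blur];"
        f"[base][blur]overlay={x}:{y}"
    )

# Usage: ffmpeg -i in.mp4 -filter_complex "<this string>" out.mp4
print(blur_region_filter(100, 50, 200, 200))
```

For boxes that move per frame you'd need something fancier, e.g. ffmpeg's sendcmd/zmq filters to update coordinates over time, or compositing the mask frames in Python before encoding.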