frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
94•theblazehen•2d ago•22 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•189 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•549 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
47•videotopia•4d ago•1 comment

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•17 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
227•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•111 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
327•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
486•todsacerdoti•21h ago•239 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
283•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•275 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•3h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
3•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
250•i5heu•16h ago•193 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•3 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•442 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
143•SerCe•9h ago•132 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•12 comments

Google AI Edge – On-device cross-platform AI deployment

https://ai.google.dev/edge
233•nreece•8mo ago

Comments

davedx•8mo ago
More information here: https://ai.google.dev/edge/mediapipe/solutions/guide

(It seems to be open source: https://github.com/google-ai-edge/mediapipe)

I think this is a unified way of deploying AI models that actually run on-device ("edge"). I guess a sort of "JavaScript of AI stacks"? I wonder who the target audience is for this technology?

wongarsu•8mo ago
Some of the mediapipe models are nice, but mediapipe has been around forever (well, since 2019). It has always been about running AI on the edge, back when the exciting frontier of AI was visual tasks.

For stuff like face tracking it's still useful, but for some other tasks, like image recognition, the world has changed drastically.

babl-yc•8mo ago
I would say the target audience is anyone deploying ML models cross-platform, specifically models that require supporting code beyond the TFLite runtime to make them work.

LLMs and computer vision tasks are good examples of this.

For example, a hand-gesture recognizer might require:

- Pre-processing of the input image to a certain color space + image size
- Copying the image to GPU memory
- Running an object detection TFLite model to detect the hand
- Resizing the output image
- Running a gesture recognition TFLite model to detect the gesture
- Post-processing of the gesture output into something useful

Shipping this to iOS+Android requires a lot of code beyond executing TFLite models.

The Google Mediapipe approach is to package this graph pipeline and shared processing "nodes" into a single C++ library where you can pick and choose what you need and re-use operations across tasks. The library also compiles cross-platform, and the supporting tasks can offer GPU acceleration options.
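
To make that concrete, here is roughly what such a packaged pipeline looks like through MediaPipe's Python Tasks API - a sketch, not their internals; the .task bundle and image file names are placeholders:

    import mediapipe as mp
    from mediapipe.tasks import python
    from mediapipe.tasks.python import vision

    # One .task bundle packages the whole graph: the detection model,
    # the gesture model, and the pre/post-processing between them.
    options = vision.GestureRecognizerOptions(
        base_options=python.BaseOptions(model_asset_path="gesture_recognizer.task"),
        num_hands=1)
    recognizer = vision.GestureRecognizer.create_from_options(options)

    result = recognizer.recognize(mp.Image.create_from_file("hand.jpg"))
    for gesture in result.gestures:
        print(gesture[0].category_name, gesture[0].score)

All of the plumbing listed above (color conversion, GPU copies, resizing, the two model runs) hides behind that one recognize() call.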

One internal debate Google likely had was whether it was best to extend TFLite runtime with these features, or to build a separate library (Mediapipe). TFLite already supports custom compile options with additional operations.

My guess is they thought it was best to keep TFLite focused on "tensor based computation" tasks and offload broader operations like LLM and image processing into a separate library.

yeldarb•8mo ago
Is this a new product or a marketing page tying together a bunch of the existing MediaPipe stuff into a narrative?

Got really excited then realized I couldn’t figure out what “Google AI Edge” actually _is_.

Edit: I think it’s largely a rebrand of this from a couple years ago: https://developers.googleblog.com/en/introducing-mediapipe-s...

rvnx•8mo ago
Judge for yourself here: https://mediapipe-studio.webapps.google.com/studio/demo/imag...

Go to this page using your mobile phone.

I am apparently a doormat or a seatbelt.

It seems to be a rebranded failure. At Google you get promoted for product launches because of the OKR system, and more rarely for maintenance.

tfsh•8mo ago
Perhaps you missed the associated documentation? This is a classification tool with a fixed set of labels: it "uses an EfficientNet architecture and was trained using ImageNet to recognize 1,000 classes, such as trees, animals, food, vehicles".

The full list [1] doesn't seem to include a human. You can tweak the score threshold to reduce false positives.

1: https://storage.googleapis.com/mediapipe-tasks/image_classif...
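
For reference, the threshold is just an option on the classifier. A minimal sketch with the Python Tasks API (model and image file names are placeholders):

    import mediapipe as mp
    from mediapipe.tasks import python
    from mediapipe.tasks.python import vision

    options = vision.ImageClassifierOptions(
        base_options=python.BaseOptions(model_asset_path="efficientnet_lite0.tflite"),
        max_results=3,
        score_threshold=0.3)  # drop low-confidence guesses like "doormat"
    classifier = vision.ImageClassifier.create_from_options(options)

    result = classifier.classify(mp.Image.create_from_file("photo.jpg"))
    for category in result.classifications[0].categories:
        print(category.category_name, category.score)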

rvnx•8mo ago
You're right about the human class, that would explain it, but I still find it surprising that such a "common item" as a human is not there.

Did you also try items from the list?

When there is a match (and that's not frequent), to me it's still very low confidence (like noise or luck).

It seems to be a repackaging of https://blog.tensorflow.org/2020/03/higher-accuracy-on-visio...

So an old release from 5 years ago (a very long time in the AI world), and AFAIK it has been superseded by YOLO-NAS and other models. MediaPipe feels like a really old tool, except for some specific subtasks like face tracking.

And as a side note, the OKR system at Google is a very serious thing; there are a lot of people internally gaming the system, and that could explain why this is a "new" launch instead of a rather disappointing rebrand of the 2020 version.

I'd recommend building on more modern tools instead, such as: https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Ins... (runs on iPhone with < 1GB of memory)

bigyabai•8mo ago
> And as a side note, the OKR system at Google is a very serious thing; there are a lot of people internally gaming the system.

So you came here to offer a knee-jerk assessment of an AI runtime and blamed the failure on OKRs. Then somebody points out that your use-case isn't covered by the model, and you're looping back around to the OKR topic again. To assess an AI inference tool.

Why would you even bother hitting reply on this post if you don't want to talk about the actual topic being discussed? "Agile bad" is not a constructive or novel comment.

danielb123•8mo ago
Years behind what is already available through frameworks like CoreML and TinyML. Plus, Google first has to prove they won't kill the product to meet next quarter's investor expectations.
spacecadet•8mo ago
It's just a rebranded TensorFlow Lite, I've been using that on edge devices since 2019... CoreML is great too!
babl-yc•8mo ago
This isn't really true. They are different offerings.

CoreML is specific to the Apple ecosystem and lets you convert a PyTorch model to a CoreML .mlmodel that will run with acceleration on iOS/Mac.
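
The usual coremltools flow for that is short - a sketch using a stock torchvision model, though any traceable PyTorch model works the same way:

    import torch
    import torchvision
    import coremltools as ct

    model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()
    example = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example)  # CoreML converts from TorchScript

    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=example.shape)],
        convert_to="mlprogram")  # ML Program format, accelerated on ANE/GPU
    mlmodel.save("MobileNetV3.mlpackage")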

Google Mediapipe is a giant C++ library for running ML flows on any device (iOS/Android/Web). It includes Tensorflow Lite (now LiteRT) but is also a graph processor that helps with common ML preprocessing tasks like image resizing, annotating, etc.

Google killing products early is a good meme, but Mediapipe is open source, so you can at least credit them with that. https://github.com/google-ai-edge/mediapipe

I used a fork of Mediapipe for a contract iOS/Android computer vision product and it was very complex but worked well. A cross-platform solution would not have been possible with CoreML.

NetOpWibby•8mo ago
I wish MediaPipe was good for facial AR but in my experience it’s lacking.
bigyabai•8mo ago
My brother in Christ, CoreML only exists because Apple saw Tensorflow and wanted the featureset without cooperating on a common standard. TF was like 2 years old (and fairly successful) by the time CoreML was announced. To this day CoreML is little more than a proprietary BLAS interface, with nearly zero industry buy-in.

Terrifying what being an iOS dev does to a feller.

elpakal•8mo ago
The generative AI piece is not available on Apple platforms, right? I think that would be huge, and I really hope Apple gives us something similar. And I gotta say, the chat piece of this seems really useful too.

Also where the f is Swift Assist already

mattnewton•8mo ago
TensorFlow Lite has been battle-tested on literally billions of devices over the years, and this looks like a rebrand/extension of that plus MediaPipe, one of its biggest users. Google has been serious about on-device ML for over 5 years now; I don't think they are going to kill this. Confusingly rebrand it, maybe :)
coderatlarge•8mo ago
Is it possible to go to the iPhone App Store and get an app that is essentially an ollama-like model downloader and launcher?
zb3•8mo ago
So can we run Gemma 3n on Linux? So much fluff, yet this is unclear to me.
quaintdev•8mo ago
As far as I know it's based on the Gemini Nano architecture, which runs exclusively on Android and Chrome. So I'm guessing you can't run it on Linux outside Chrome.
saratogacx•8mo ago
In the model's community section, Google confirms they're working on a GGUF version so you can host it like most other models.

https://huggingface.co/google/gemma-3n-E4B-it-litert-preview...
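
Once that lands, hosting it should look like any other GGUF model - a sketch with llama-cpp-python, where the file name is a placeholder for whatever Google actually publishes:

    from llama_cpp import Llama

    # Load the (hypothetical) Gemma 3n GGUF like any other local model.
    llm = Llama(model_path="gemma-3n-E4B-it.gguf", n_ctx=2048)
    out = llm("Explain on-device inference in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])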

ricardobeat•8mo ago
This is a repackaging of TensorFlow Lite + MediaPipe under a new “brand”.
echelon•8mo ago
The same stuff that powers this?

https://3d.kalidoface.com/

It's pretty impressive that this runs on-device. It's better than a lot of commercial mocap offerings.

AND this was marked deprecated/unsupported over 3 years ago, despite the fact that it's a pretty mature solution.

Google has been sleeping on their tech or not evangelizing it enough.

hatmanstack•8mo ago
Played with this a bit, and from what I gathered it's purely a re-architecting of PyTorch models to work as .tflite models; at least that's what I was using it for. It worked well with a custom FinBERT model, with negligible size reduction. It converted a quantized version, but the outputs were not close. From what I remember of the docs, it was created for standard PyTorch models, like "torchvision.models", so maybe with those you'd have better luck. Granted, this was all ~12 months ago, so it sounds like I might have dodged a pack of Raptors?
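
If it's the same tool, that conversion path is the ai-edge-torch library; the basic flow is roughly this (a sketch with a stock torchvision model, assuming the API hasn't shifted since):

    import numpy as np
    import torch
    import torchvision
    import ai_edge_torch  # Google's PyTorch -> TFLite converter

    model = torchvision.models.resnet18(weights="DEFAULT").eval()
    sample_inputs = (torch.randn(1, 3, 224, 224),)

    # Re-trace the model and lower it to a .tflite flatbuffer.
    edge_model = ai_edge_torch.convert(model, sample_inputs)

    # Sanity-check outputs before shipping - this is exactly where
    # quantized conversions can drift.
    torch_out = model(*sample_inputs).detach().numpy()
    print(np.allclose(torch_out, edge_model(*sample_inputs), atol=1e-4))

    edge_model.export("resnet18.tflite")
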
stanleykm•8mo ago
I really wish people who make edge inference libraries like this would quit rebranding them every year and just build the damn things to be fast and small and consistently updated.
bigyabai•8mo ago
ONNX exists, but since they don't change their name very often, not a whole lot of people know about it.
pzo•8mo ago
ONNX Runtime is actually quite popular, mostly because of Hugging Face transformers - many people just don't know they're using it under the hood. What's missing is a native transformers runtime so you can easily deploy not only to desktops and servers. Transformers.js is some kind of attempt - it can deploy to the web and React Native.
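
For anyone who hasn't used it outside transformers, raw onnxruntime inference is only a few lines - a sketch, with the model path and input shape as placeholders:

    import numpy as np
    import onnxruntime as ort

    # Same API on desktop, server, and mobile builds.
    session = ort.InferenceSession("model.onnx",
                                   providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name

    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {input_name: x})  # None = all outputs
    print(outputs[0].shape)
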
arbayi•8mo ago
https://github.com/google-ai-edge/gallery

A gallery that showcases on-device ML/GenAI use cases and allows people to try and use models locally.

roflcopter69•8mo ago
Genuine question, why should I use this to deploy models on the edge instead of executorch? https://github.com/pytorch/executorch

For context, I get to choose the tech stack for a greenfield project. I think that executorch, which belongs to the PyTorch ecosystem, will have a far more predictable future than anything Google does, so I'm currently leaning toward executorch.

6gvONxR4sf7o•8mo ago
For one thing, executorch is currently full of sharp edges. No idea about this one, but I had a bad experience with ET. If I were starting over, I might start from torch.fx and automate from there; fx is stable and should be around for a while.
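
For reference, the torch.fx starting point is tiny - a minimal sketch of what "start from fx" means here:

    import torch
    from torch.fx import symbolic_trace

    class TinyNet(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x) * 2

    # symbolic_trace gives you the model as an editable graph IR,
    # a base for writing your own export automation on top.
    gm = symbolic_trace(TinyNet())
    print(gm.graph)  # nodes: placeholder -> relu -> mul -> output
    print(gm.code)   # regenerated Python source
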
salamo•8mo ago
Really happy to see additional solutions for on-device ML.

That said, I probably wouldn't use this unless mine was one of the specific use cases supported[0]. I have no idea how hard it would be to add a new model supporting arbitrary inputs and outputs.

For running inference cross-device I have used ONNX, which is low-level enough to support whatever weights I need. For a good number of tasks you can also use transformers.js, which wraps ONNX and handles things like decoding (unless you really enjoy implementing beam search on your own). I believe an equivalent link to the above would be [1], which is just much more comprehensive.

[0] https://ai.google.dev/edge/mediapipe/solutions/guide

[1] https://github.com/huggingface/transformers.js-examples

6gvONxR4sf7o•8mo ago
Anybody have any experience with this? I just spent a while contorting a custom PyTorch model to get it to export to CoreML, and it was full of this, that, and the other not being supported, or segfaulting, and all sorts of silly errors. I'd love it if someone could say this isn't full of sharp edges too.
smeej•8mo ago
I got it all set up and tested out Gemma3 1B on a Pixel 8a. That only took a few minutes, which was nice.

But it was garbage. It barely parsed the question, didn't even attempt to answer it, and replied in what was barely serviceable English. All I asked was how it was small enough to run locally on my phone. It was bad enough for me to abandon the model entirely, which is saying a lot, because I feel like I have pretty low expectations for AI work in the first place.

throwaway314155•8mo ago
> All I asked was how it was small enough to run locally on my phone

Bit off-topic, but did you expect to see a real or honest answer about itself? I see many people under the impression that models know information about themselves that isn't in the system prompt. Couldn't be further from the truth. In fact, those questions specifically lead to hallucinations more often, resulting in an overconfident assertion with a "reasonable" answer.

The information the model knows (offline - no tools allowed) stops weeks, if not months or years, before the model is done training. There is _zero_ information about its inception, how it works, or anything similar in its weights.

Sorry, this is mostly directed at the masses - not you.

smeej•8mo ago
Not really, but I did expect an answer, or at least a non-answer, that showed it understood the question, and that an answer was expected.
DrSiemer•8mo ago
Why would you ask a 1B model anything? Those are only useful for rephrasing output at best.
smeej•8mo ago
...because they advertise its potential for on-device usage for basic functionality in this very product, and I wanted to see how it worked? I'm not sure what you want me to say. I tested the product because I wanted to know if it worked. That was the extent of it.
suilk•8mo ago
How about the MNN engine?
synergy20•8mo ago
Can this run on custom embedded devices, or just phones?
synergy20•8mo ago
It does support Python and web, and runs on Raspberry Pi.
init0•8mo ago
This can be done with WebLLM, no?
pzo•8mo ago
My take: TensorFlow Lite + MediaPipe was great, but Google really neglected it in the last 3 years or so. MediaPipe hasn't had many meaningful updates in the last 3 years. A lot of its models today are outdated or slow. TF Lite supported NPUs (like the Apple ANE), but MediaPipe never did. They also made too much of a mess with different branding: MLKit, Firebase ML, TF Lite, LiteRT.

These days it's probably better to stick with onnxruntime via Hugging Face transformers or the transformers.js library, or to wait until executorch matures. I haven't seen a SOTA model release with an official port to TensorFlow Lite / LiteRT for a long time: SAM2, EfficientSAM, EdgeSAM, DFINE, DEIM, Whisper, Lite-Whisper, Kokoro, DepthAnythingV2 - everything is PyTorch by default, but still with big communities for ONNX and MLX.

rs186•8mo ago
LOL when you realize that Google wants you to download the apk and sideload it instead of installing from the Play Store ( https://github.com/google-ai-edge/gallery#-get-started-in-mi... )

You know how terrible the store and the publishing process are -- their own people don't even use it.

bcraven•8mo ago
Presumably because this is an "experimental Alpha release".

Beta releases may end up on the Play Store, but it sounds like they're not ready yet.

dingody•8mo ago
I keep seeing people talk a lot about edge AI. Just curious: aside from experimental or toy projects, are there any real killer use cases out there?