Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•5m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•7m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•8m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•9m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•11m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•12m ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•16m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•18m ago•1 comment

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•18m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•19m ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•21m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•24m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•27m ago•1 comment

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•33m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•34m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•40m ago•1 comment

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•41m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•42m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•44m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•46m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•47m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•49m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•52m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•53m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•56m ago•1 comment

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•57m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•57m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•58m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•1h ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comment

Web Translator API

https://developer.mozilla.org/en-US/docs/Web/API/Translator
97•kozika•7mo ago

Comments

rhabarba•7mo ago
You had me at "Browser compatibility".
Raed667•7mo ago
Chrome embeds a small LLM (never stops being a funny thing) in the browser allowing them to do local translations.

I assume every browser will do the same as on-device models start becoming more useful.

rhabarba•7mo ago
While I appreciate the on-device approach for a couple of reasons, it is rather ironic that Mozilla needs to document that for them.
its-summertime•7mo ago
Firefox also has on-device translations, for what it's worth.
Asraelite•7mo ago
What's the easiest way to get this functionality outside of the browser, e.g. as a CLI tool?

Last time I looked I wasn't able to find any easy-to-run models that supported more than a handful of languages.

ukuina•7mo ago
ollama run gemma3:1b

https://ollama.com/library/gemma3

> support for over 140 languages

diggan•7mo ago
Try to translate a paragraph with 1b gemma and compare it to DeepL :) Still amazing it can understand anything at all at that scale, but can't really rely on it for much tbh
_1•7mo ago
If you need to support several languages, you're going to have to have a zoo of models. Small ones just can't handle that many; and they especially aren't good enough for distribution, we only use them for understanding.
JimDabell•7mo ago
That depends on what counts as “a handful of languages” for you.

You can use llm for this fairly easily:

    uv tool install llm

    # Set up your model however you like. For instance:
    llm install llm-ollama
    ollama pull mistral-small3.2

    llm --model mistral-small3.2 --system "Translate to English, no other output" --save english
    alias english="llm --template english"

    english "Bonjour"
    english "Hola"
    english "Γειά σου"
    english "你好"
    cat some_file.txt | english
https://llm.datasette.io
usagisushi•7mo ago
Tip: You might want to use `uv tool install llm --with llm-ollama`.

ref: https://github.com/simonw/llm/issues/575

JimDabell•7mo ago
Thanks!
jan_Sate•7mo ago
That's just the base/stock/instruct model for general use cases. There's gotta be a finetune specialized in translation, right? Any recommendations for that?

Plus, mistral-small3.2 has too many parameters. Not all devices can run it fast. That probably isn't the exact translation model being used by Chrome.

JimDabell•7mo ago
I haven’t tried it myself, but NLLB-200 has various sizes going down to 600M params:

https://github.com/facebookresearch/fairseq/tree/nllb/

If running locally is too difficult, you can use llm to access hosted models too.

wittjeff•7mo ago
https://ai.meta.com/blog/nllb-200-high-quality-machine-trans... https://www.youtube.com/watch?v=AGgzRE3TlvU
deivid•7mo ago
You can use bergamot ( https://github.com/browsermt/bergamot-translator ) with Mozilla's models ( https://github.com/mozilla/firefox-translations-models ).

Not the easiest, but easy enough (requires building).

I used these two projects to build an on-device translator for Android.

mftrhu•7mo ago
Setting aside general-purpose LLMs, there exist a handful of models geared towards translation between hundreds of language pairs: Meta's NLLB-200 [0] and M2M-100 [1] can be run using HuggingFace's transformers (plus numpy and sentencepiece), while Google's MADLAD-400 [2], in GGUF format [3], is also supported by llama.cpp.

You could also look into Argos Translate, or just use the same models as Firefox through kotki [4].

[0] https://huggingface.co/facebook/nllb-200-distilled-600M [1] https://huggingface.co/facebook/m2m100_418M [2] https://huggingface.co/google/madlad400-3b-mt [3] https://huggingface.co/models?other=base_model:quantized:goo... [4] https://github.com/kroketio/kotki

tempodox•7mo ago
What compatibility? It's Chrome-only.
troupo•7mo ago
While this might be useful, be mindful:

- it's experimental

- the "specification" is nowhere near a standards track: https://webmachinelearning.github.io/translation-api/

Of course it's already shipped in Chrome, and now Chrome pretends that its own Chrome-only API is somehow standard. Expect people on HN to blame other browsers for not shipping this.

jazzypants•7mo ago
I've been pleasantly surprised by the last few conversations about this type of thing that I've seen. It seems like people are pretty sick of Chrome's IE proclivities.
moron4hire•7mo ago
This is the W3C standardization process.

The W3C is not a prescriptive standardization body. It doesn't have any regulatory power giving it any teeth to go after vendors acting in bad faith. So the W3C process is descriptive and encourages a period of competitive divergence in implementations. It is only after the early adopters have hammered on the features and figured out which parts they like best that a Web API can then start to get standardized.

troupo•7mo ago
> This is the W3C standardization process.

Let me quote the site for you

--- start quote ---

This specification was published by the Web Machine Learning Community Group. It is not a W3C Standard nor is it on the W3C Standards Track.

--- end quote ---

> So the W3C process is descriptive and encourages a period of competitive divergence in implementations.

That is exactly the opposite of how the W3C standardization process works

> It is only after the early adopters have hammered on the features and figured out which parts they like best that a Web API can then start to get standardized.

Yes, and until then this work is not supposed to be enabled by default

moron4hire•7mo ago
You're quoting from their literal, W3C-format working draft, quoting the name of the W3C working group that has been formed to standardize this.

Being "standards track" means the spec is out of draft and has been proposed. It does not mean "we intend to standardize this". It means, "we've put in all of the work to standardize this and are waiting on final acceptance".

I don't know what you mean by "isn't supposed to be enabled by default". There is no mention of when browser vendors may or may not ship features in the standardization process.

troupo•7mo ago
> You're quoting from their literal, W3C-format working draft, quoting the name of the W3C working group that has been formed to standardize this.

The literal "Draft Community Group Report" (and not a working draft) is a literal link to w3c standardization process: https://www.w3.org/standards/types/#CG-DRAFT

Since the words "not on the W3C Standards Track" from the document didn't persuade you, you could go to the actual w3c process and answer a few simple questions:

- is "Draft Community Group Report" a document on a standards track?

- what does it take to get on the standards track?

- what does it take to "put in all of the work to standardize this and wait on final acceptance", and how many steps there are between "Draft Community Group Report" and this stage?

> I don't know what you mean by "isn't supposed to be enabled by default".

For a person who is so confidently talking about the w3c standards process, I'm surprised you don't.

w3c doesn't explicitly state this. Except for the final few stages, all steps in the process contain the following: "Software MAY implement these specifications at their own risk but implementation feedback is encouraged."

However.

Since this is browsers we're talking about, it means that whatever browsers ship enabled by default will remain in the wild forever because people will immediately start depending on that implementation.

Additionally, a standard cannot become a standard until there are at least two independent implementations of a proposed feature. This is to eliminate the possibility of shipping purely internal APIs, or depending on a single library/implementor.

So the way to do it, especially for APIs that are nowhere close to being "waiting for final acceptance" is: ship behind a flag, iron out issues and differences, perhaps change the API (and changes to API happen all the time), then ship.

Of course, Chrome shits all over this process and just ships whatever it wants to ship.

moron4hire•7mo ago
> However.

No "however". The W3C is not actually a standards body. They don't get to tell people what to do. The W3C even knows this: even though we all colloquially call these things "standards", they're actually just "Recommendations".

The FCC is a standards body. NIST is a standards body. You go against them, you get fined. Messing with weights and measures is one of two crimes defined in the US Constitution, they both come with the death penalty, and the other one is treason.

That's not the W3C. You go against the W3C, worst case scenario, Apple says, "nah, we ain't gonna do that" and then Web devs don't adopt your feature because they can't run it on one of the biggest platforms: Safari on iOS, the only browser allowed to run on iOS.

What you've just pointed out is the W3C explicitly saying it doesn't mind if browser vendors implement features early. They "MAY" do it. And then they remind everyone the risk is on the vendor if the eventual sta... excuse me, "recommendation", diverges from what does eventually get standardized.

troupo•7mo ago
> No "however". The W3C is not actually a standards body.

Oh look. You've stopped claiming that this spec is on the standards track, that the final step is just final acceptance, or that there's an actual standardisation process.

You've now switched to saying that this is not an actual standardization process and to pretending that I said something I never did: that w3c enforces standards.

Imagine if you actually knew anything about what you were talking about and argued in good faith.

Adieu.

nachomg•7mo ago
This gives strong IE vibes.
lynx97•7mo ago
Can we please NOT autotranslate the web? I have yet to find a site where the quality of autotranslate does not make me stop using that site. I was already irritated when Google started to show me de.wikipedia.org articles despite me explicitly searching for the English article name. Then came Etsy, where the autotranslate quality was so bad I stopped using the site altogether.
diggan•7mo ago
The good news is that if the browsers offered this natively, websites wouldn't need their own implementation of this. And if it's in the client (the browser), you're most likely gonna be able to turn it off globally, just like how you like it.

Worst case scenario a user-script/extension could monkey patch it out, but probably clients will let you disable it.

sandstrom•7mo ago
This is not auto-translation.

Rather, it's an API developers can use to add inline translation to web apps.

For example, under a comment in your app, you can (a) detect the language, (b) if it's different from the current user's/browser's language, offer to translate it with a small link, and (c) if the user clicks the link, translate the content to their language.
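
That flow can be sketched in plain JavaScript. The function name and the primary-subtag comparison are illustrative, not part of any API:

```javascript
// Sketch of step (b): decide whether to offer a "translate" link.
// Compares only the primary subtags of BCP 47 tags ('en-US' -> 'en');
// a real app might want finer-grained rules.
function shouldOfferTranslation(detectedLang, userLang) {
  const primary = (tag) => tag.toLowerCase().split('-')[0];
  return primary(detectedLang) !== primary(userLang);
}

console.log(shouldOfferTranslation('fr', 'en-US'));    // true
console.log(shouldOfferTranslation('en-GB', 'en-US')); // false
```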

lofaszvanitt•7mo ago
But Reddit already does it! It's a new form of cultural colonisation by a headless society.
RockRobotRock•7mo ago
https://github.com/mozilla/standards-positions/issues/1015
sandstrom•7mo ago
I honestly don't understand the arguments Mozilla have against it.

Safari/webkit is positive (though no official stance yet):

https://github.com/WebKit/standards-positions/issues/339#iss...

yjftsjthsd-h•7mo ago
I don't know enough to understand the DOM argument, but

> The spec assumes a certain form of translation backend, exposing information about model availability, download progress, quotas, and usage prediction. We'd like to minimize the information exposure so that the implementation can be more flexible.

reads to me as Chrome once again trying to export itself verbatim as a "standard" and Mozilla pointing out that that's not really applicable to others.

Also the WebKit post seems to raise somewhat similar arguments but on the basis of fingerprinting/privacy problems.

dveditz_•7mo ago
The "exposing information about..." bit in the Mozilla statement is a fingerprinting/privacy argument like WebKit's.
yjftsjthsd-h•7mo ago
Maybe? I read that as more of a compatibility thing; if sites depend on information that Chrome exposes, then it's easy for them to have bugs on browsers that don't expose the exact same information (possibly by way of that information not even existing or making sense for a different implementation).
sfmz•7mo ago
https://developer.chrome.com/docs/ai/translator-api

    const translator = await Translator.create({
      sourceLanguage: 'en',
      targetLanguage: 'fr',
    });

    await translator.translate('Where is the next bus stop, please?');
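
A slightly more defensive sketch: the fallback behaviour and function name are my own illustration, while the `availability()` values follow the Chrome docs. Outside Chrome this just returns the untranslated text instead of throwing:

```javascript
// Hedged sketch: feature-detect and check availability before creating
// a translator, so browsers without the API fall back gracefully.
async function translateOrFallback(text) {
  if (typeof Translator === 'undefined') {
    return `[untranslated] ${text}`; // API absent in this browser/runtime
  }
  const availability = await Translator.availability({
    sourceLanguage: 'en',
    targetLanguage: 'fr',
  });
  if (availability === 'unavailable') {
    return `[untranslated] ${text}`;
  }
  const translator = await Translator.create({
    sourceLanguage: 'en',
    targetLanguage: 'fr',
  });
  try {
    return await translator.translate(text);
  } finally {
    translator.destroy(); // release the on-device model promptly
  }
}
```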

ks2048•7mo ago
So, this is Google Translate running locally in Chrome? I wonder if it is a small/degraded model or limited languages? Otherwise, how is it not a simple way around the paid Google API?
sfmz•7mo ago
There are already ways to do translation locally in JavaScript with neural nets running in WASM; this is just more convenient.

https://huggingface.co/Xenova/nllb-200-distilled-600M

vitonsky•7mo ago
I tried to use this model in my package with a translators kit: https://github.com/translate-tools/core/pull/112

It runs very slowly. A test case that runs translation of a 3k-character text multiple times takes about 30 seconds with Google Translate, but more than 10 minutes with `nllb-200-distilled-600M`.

Text sample: https://github.com/translate-tools/core/pull/112/files#diff-...

My tests run on Node.js; it looks like in the browser it has no chance for real-world use.

ameliaquining•7mo ago
The article explains that this feature uses a small (up to 22 GB) language model that runs on-device.

That said, the "simple way around the paid API" problem is something Google has to deal with anyway, because there are a bunch of ways to use Google Translate without paying for it (e.g., the translate.google.com web UI, or the non-JavaScript-exposed "Translate to [language]" feature built into Chrome), and any action that can be taken by a human can in principle also be taken by a script. The only thing to do about it is use secret-sauce server-side abuse detection to block as much of this activity as they can; they can't get all of it but they can get enough to push enough people onto the paid API that the economics pencil out.

jannes•7mo ago
So installing Chrome is going to require 22 GB of disk space now?
ks2048•7mo ago
It only installs models that are explicitly downloaded via this API, it seems.

Also, it says you need 22 GB free, but below (under "Note"...), it says the model takes "around a couple of GB".

cj•7mo ago
Does the API trigger the download automatically, or does it ask for user permission?

(Answered my own question): Doesn't look like it requires the user's permission. Upon first use, the model will start downloading. The user has to wait for the download to finish before the API will work. That could take hours for 22 GB.

I presume this can't work on mobile?

https://developer.mozilla.org/en-US/docs/Web/API/Translator_...
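
Per the Chrome docs, a `monitor` callback on `create()` exposes download progress; the guard and logging here are illustrative:

```javascript
// Sketch: observe the model download that Translator.create() may trigger.
// The monitor/downloadprogress hook follows the Chrome docs; the guard
// makes the snippet degrade gracefully where the API is absent.
async function createWithProgress() {
  if (typeof Translator === 'undefined') {
    console.log('Translator API not available');
    return null;
  }
  return Translator.create({
    sourceLanguage: 'en',
    targetLanguage: 'fr',
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        // e.loaded is a fraction between 0 and 1
        console.log(`Model download: ${Math.round(e.loaded * 100)}%`);
      });
    },
  });
}

createWithProgress();
```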

ameliaquining•7mo ago
The article indicates that it will only download the model over an unmetered connection, e.g., while the phone is connected to wifi.
djhn•7mo ago
Seems very backwards for markets where wired/wifi connections at home are nonexistent, 4g/5g is already unmetered, and phones are the wifi you connect your devices to.
ameliaquining•7mo ago
If you have an unlimited data plan then Android counts that as an unmetered connection and Chrome will download the model over it.
dhx•7mo ago
This sounds off by an order of magnitude? Firefox's local translation models are only 20-70MB per language pair direction (e.g. en-to-fr or fr-to-en).[1] These models are also only released when they reach at least -5% of Google Translate's COMET score.[1] Currently Firefox ships with support for 32 xx-to-en language pairs and 29 en-to-xx language pairs.[1] As the number of language pairs increases, it probably isn't unreasonable for browsers to stop bundling every language pair and instead prompt users to download uncommon models the first time the user wants to use them.

[1] https://mozilla.github.io/translations/firefox-models/

akazantsev•7mo ago
Here is the information on how it works in Chrome. https://developer.chrome.com/docs/ai/translator-api
dbbk•7mo ago
Could it get more degraded?
sandstrom•7mo ago
This would be very useful.

Basically, the 'translate this' button you see on Twitter or Instagram next to comments in foreign languages. This API would make it trivial for all developers to add that to their web apps.

greatgib•7mo ago
Except that it is the user that will pay with his own llm tokens
cAtte_•7mo ago
how do you know this?
8n4vidtmkvmk•7mo ago
The user pays with some disk space, not API tokens
lofaszvanitt•7mo ago
Another useful feature that nobody could've replicated themselves.
indeyets•7mo ago
So, the browsers have to provide some means for choosing the desired translation engine (add-on API maybe?) and this is a standard API which all of the providers should implement.

right?

pwdisswordfishz•7mo ago
Why does it need to be a JavaScript API?

Why not just use the lang= attribute as it was intended, then let the user select text to translate as they wish?

tempodox•7mo ago
It's only implemented in Google Chrome, so go figure.
Uehreka•7mo ago
If Chrome tried to pull this in like 2016, when Google Translate was the only-ish game in town, I’d call them out for it. But we now have multiple competing open weights translation models that are really good, making this kind of service essentially a commodity. One vendor might give users free access to their services to entice them to use their browser, another might differentiate themself by running the model locally and giving the user better privacy guarantees in exchange for performance.

I get that this is one more brick in the wall that teams like LadyBird will have to maintain, but as a web developer I do think more Web API features are generally a good thing, as they make it easier for smaller shops to implement richer functionality.

diggan•7mo ago
If it's an HTML attribute, then you can only use it with DOM elements, with no control over when it runs.

Instead, a JS API gives more flexibility and control.

Besides, I think the "lang" attribute is supposed to signal what the language of the text inside that element is, not what it could/should be. So even if going with attributes would be the way forward, a new one would need to be created.

minus7•7mo ago
I was excited that Firefox finally exposed its local translations as API, but it's Chrome-only (still?). Will be nice for userscripts, for example to replace Twitter's translation button that hardly ever works
troupo•7mo ago
> I was excited that Firefox finally exposed its local translations as API, but it's Chrome-only (still?).

Because it was, is, and will be Chrome-only for the foreseeable future: https://news.ycombinator.com/item?id=44375326

vitonsky•7mo ago
As the maintainer of https://linguister.io/, should I start working on a polyfill for that API?

Even if this API is implemented in the next few years, there will be browsers that hold back progress.

Linguist has enough users that we could expose this API to client-side code: users whose browsers don't implement the Translation API yet could install Linguist, and sites that use the Translation API would work fine. Translation API calls would be proxied by Linguist, using the user's preferred translator module.
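
One possible shape for such a polyfill — everything here is hypothetical, and the stub only stands in for the extension's real translation backend:

```javascript
// Hypothetical polyfill shape: install a Translator global only when the
// native API is missing, mirroring the spec's surface.
if (typeof globalThis.Translator === 'undefined') {
  globalThis.Translator = {
    async availability() {
      return 'available'; // the extension would decide this in reality
    },
    async create({ sourceLanguage, targetLanguage }) {
      return {
        async translate(text) {
          // A real polyfill would proxy to the extension's engine here;
          // this stub only tags the text so the call shape is visible.
          return `[${sourceLanguage}->${targetLanguage}] ${text}`;
        },
        destroy() {},
      };
    },
  };
}

// Usage is then identical to the native API:
Translator.create({ sourceLanguage: 'es', targetLanguage: 'en' })
  .then((t) => t.translate('Hola'))
  .then(console.log); // [es->en] Hola
```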

Any thoughts about it?

seabass•7mo ago
With JS being a garbage-collected language, what is the benefit of the destroy method here, and why is it necessary?
charcircuit•7mo ago
There is no guarantee when it will be garbage collected. Large local models use a lot of resources and should be unloaded as soon as possible so other programs on the computer can use them.
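
The point generalizes to any heavyweight handle: an explicit release hook is deterministic, GC is not. A mock illustrating the pattern (all names are mine):

```javascript
// Mock illustrating deterministic release: destroy() frees the resource
// at a known point, instead of whenever the GC gets around to it.
class MockTranslator {
  constructor() {
    this.modelLoaded = true; // stands in for a multi-GB model in memory
  }
  destroy() {
    this.modelLoaded = false; // release immediately
  }
}

const t = new MockTranslator();
try {
  // ... t would be used for translate() calls here ...
} finally {
  t.destroy(); // runs even if translation throws
}
console.log(t.modelLoaded); // false
```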
mediumsmart•7mo ago
I am ok with Chrome translating the Italian version of a site back into the original German version living in the neighboring folder for good money.
eeee11•7mo ago
Hahaha
eeee11•7mo ago
Flood flood flood flood
pabs3•7mo ago
Wish this was a desktop API too; it would be useful outside web browsers, especially in email clients.