
Show HN: I was curious about spherical helix, ended up making this visualization

https://visualrambling.space/moving-objects-in-3d/
504•damarberlari•7h ago•97 comments

Zedless: Zed fork focused on privacy and being local-first

https://github.com/zedless-editor/zed
263•homebrewer•2h ago•116 comments

Introduction to Bluesky's AT Protocol

https://mackuba.eu/2025/08/20/introduction-to-atproto/
55•psionides•2h ago•24 comments

Show HN: PlutoPrint – Generate Beautiful PDFs and PNGs from HTML with Python

https://github.com/plutoprint/plutoprint
13•sammycage•49m ago•2 comments

Gemma 3 270M re-implemented in pure PyTorch for local tinkering

https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/12_gemma3
250•ModelForge•7h ago•40 comments

An Update on Pytype

https://github.com/google/pytype
105•mxmlnkn•4h ago•36 comments

Coris (YC S22) Is Hiring

https://www.ycombinator.com/companies/coris/jobs/rqO40yy-ai-engineer
1•smaddali•27m ago

Visualizing GPT-OSS-20B embeddings

https://melonmars.github.io/LatentExplorer/embedding_viewer.html
32•melonmars•3d ago•16 comments

Pixel 10 Phones

https://blog.google/products/pixel/google-pixel-10-pro-xl/
258•gotmedium•4h ago•458 comments

Launch HN: Channel3 (YC S25) – A database of every product on the internet

64•glawrence13•5h ago•39 comments

Lean proof of Fermat's Last Theorem [pdf]

https://imperialcollegelondon.github.io/FLT/blueprint.pdf
36•ljlolel•3h ago•23 comments

OPA maintainers and Styra employees hired by Apple

https://blog.openpolicyagent.org/note-from-teemu-tim-and-torin-to-the-open-policy-agent-community-2dbbfe494371
94•crcsmnky•5h ago•38 comments

Gouach wants you to insert and pluck the cells from its Infinite e-bike battery

https://arstechnica.com/gadgets/2025/05/gouach-wants-you-to-insert-and-pluck-the-cells-from-its-infinite-e-bike-battery/
12•pabs3•2d ago•2 comments

Learning about GPUs through measuring memory bandwidth

https://www.evolvebenchmark.com/blog-posts/learning-about-gpus-through-measuring-memory-bandwidth
22•JasperBekkers•6h ago•3 comments

Sequoia backs Zed

https://zed.dev/blog/sequoia-backs-zed
221•vquemener•9h ago•149 comments

Closer to the Metal: Leaving Playwright for CDP

https://browser-use.com/posts/playwright-to-cdp
111•gregpr07•5h ago•84 comments

Why are anime catgirls blocking my access to the Linux kernel?

https://lock.cmpxchg8b.com/anubis.html
104•taviso•6h ago•136 comments

Linear scan register allocation on SSA

https://bernsteinbear.com/blog/linear-scan/
11•surprisetalk•3d ago•1 comment

Tidewave Web: in-browser coding agent for Rails and Phoenix

https://tidewave.ai/blog/tidewave-web-phoenix-rails
246•kieloo•11h ago•47 comments

Show HN: Anchor Relay – A faster, easier way to get Let's Encrypt certificates

https://anchor.dev/relay
47•geemus•5h ago•43 comments

AWS in 2025: Stuff you think you know that's now wrong

https://www.lastweekinaws.com/blog/aws-in-2025-the-stuff-you-think-you-know-thats-now-wrong/
196•keithly•5h ago•111 comments

Improvements to OCaml code editing: the basics of a refactor engine

https://tarides.com/blog/2025-08-20-internship-report-refactoring-tools-coming-to-merlin/
87•nukifw•7h ago•16 comments

Show HN: Luminal – Open-source, search-based GPU compiler

https://github.com/luminal-ai/luminal
63•jafioti•5h ago•24 comments

Show HN: Bizcardz.ai – Custom metal business cards

https://github.com/rhodey/bizcardz.ai
16•rhodey•3h ago•16 comments

How to Think About GPUs

https://jax-ml.github.io/scaling-book/gpus/
338•alphabetting•2d ago•104 comments

Mirrorshades: The Cyberpunk Anthology (1986)

https://www.rudyrucker.com/mirrorshades/HTML/
121•keepamovin•13h ago•67 comments

Show HN: Nestable.dev – local whiteboard app with nestable canvases, deep links

https://nestable.dev/about
18•anorak27•3h ago•7 comments

The Rise and Fall of Music Ringtones: A Statistical Analysis

https://www.statsignificant.com/p/the-rise-and-fall-of-music-ringtones
31•gmays•2d ago•39 comments

Show HN: What country you would hit if you went straight where you're pointing

https://apps.apple.com/us/app/leascope/id6608979884
48•brgross•6h ago•30 comments

Best Options for Using AI in Chip Design

https://semiengineering.com/best-options-for-using-ai-in-chip-design/
29•rbanffy•4h ago•7 comments

Gemma 3 270M re-implemented in pure PyTorch for local tinkering

https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/12_gemma3
250•ModelForge•7h ago

Comments

canyon289•5h ago
Hey all, I created this model with a top-notch team. I answered many questions last week when this hit the front page, and I'm happy to answer more here as well.

https://news.ycombinator.com/item?id=44902148

Personally I'm excited that you all have access to this model now, and I hope you get value out of using it.

GaggiX•5h ago
I imagine you and your team have finetuned the model on different tasks; can you share some results? (I have only seen the alien NPC finetuning.)
canyon289•5h ago
The Unsloth folks have finetuning numbers. Linking their post here https://www.reddit.com/r/unsloth/comments/1mq5hbb/google_gem...
WithinReason•5h ago
I would like to know your thoughts on using 2/3 of such a small model's size for embeddings. What would be different if you used a byte-level vocabulary and spent the parameter budget on transformer parameters instead? I think you would lose performance (tok/s) but might gain accuracy.
canyon289•5h ago
At this small scale the embeddings indeed were a big focus. Consider this thought process.

The tokens themselves are a form of compression. Let's say we have the word "WaffleHouse": at the character level this would be 11 tokens, but with an embedder this would be perhaps 2 or 3 tokens (I didn't actually run it through the tokenizer, but we could verify precisely). This matters a lot for on-device processing especially.
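A minimal sketch of that check, assuming the tokenizer is published on the Hugging Face hub under the id `google/gemma-3-270m` (the exact id is an assumption here):

  # Count tokens for "WaffleHouse" with the model's tokenizer (hub id assumed).
  from transformers import AutoTokenizer

  tok = AutoTokenizer.from_pretrained("google/gemma-3-270m")
  pieces = tok.tokenize("WaffleHouse")
  print(len("WaffleHouse"), "chars ->", len(pieces), "tokens:", pieces)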

So while we could get more intelligence out of the model by bumping up the "knowledge" parameters, the device would need to process more input and output tokens.

Another advantage on small devices is that the embeddings are just a lookup table, which requires little to no computation. It's the rest of the parameters that have the expensive matrix multiplications, so if we increased those we'd also be increasing the number of FLOPs needed for a forward pass.

This blog post explains it well. https://www.adamcasson.com/posts/transformer-flops
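As a rough sketch of that rule of thumb: forward-pass matmul FLOPs are roughly 2x the non-embedding parameter count per token, while the embedding lookup is nearly free. The 170M/100M split below is the published breakdown for Gemma 3 270M, but treat the numbers as approximate:

  # Embeddings are a lookup; only the transformer parameters pay matmul FLOPs.
  embedding_params = 170e6    # published Gemma 3 270M embedding count (approx.)
  transformer_params = 100e6  # remaining transformer-block parameters (approx.)
  flops_per_token = 2 * transformer_params
  print(f"~{flops_per_token / 1e6:.0f} MFLOPs per token, forward")  # ~200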

So all this to say: there are definite tradeoffs between model size, performance on evals, and compute cost. We ran many internal experiments with different choices to see what could work well, and then picked what we believed would work best for the open community.

Scene_Cast2•4h ago
How would this matrix get trained with PyTorch? I currently have a toy Transformer network - I ended up marking the matrix as sparse and using SparseAdam, which gives a bit of a performance boost, but at the same time I can't use torch.compile() on the fetch from this matrix.
WithinReason•3h ago
Makes sense, thank you.
tarruda•4h ago
Thanks for your work, it is really an amazing small LM.

Can you share what kind of hardware is necessary to train it, and how long it took?

canyon289•3h ago
Thank you!

The Gemma3 technical report contains many details on training setup https://arxiv.org/pdf/2503.19786

This was released with the initial batch of Gemma3 so it doesn't contain the 270m details, nonetheless you'll get a good idea of what it takes to build these models.

owebmaster•2h ago
Does it have function calls? Can we use it with MCP?
canyon289•2h ago
It can possibly perform basic prompted FC, but I wouldn't get your hopes up. It should be able to be a solid FC model if trained on specific tools and formats. I would not expect great MCP performance because the context window is 32k, and most MCP servers I've seen implicitly assume massive context windows.
riedel•2h ago
Very stupid question: why does the tflite model output only '[multimodal][multimodal]' when executed on GPU in the AI Edge Gallery app, while working fully on the CPU?
n0vella•5h ago
Do you think these very small models have some utility in the real world? Apart from learning and academic purposes of course.
colechristensen•5h ago
Sure, interacting with natural language without expectation that the model contains knowledge. Good for things like tool use and embeddings where the information is all retrieved.
throw310822•5h ago
Are these small models trained to privilege "raw intelligence" over factual knowledge? Is there any indication of how much of the current model is dedicated to the knowledge of multiple languages and tons of facts rather than pure understanding and reasoning?
canyon289•5h ago
The evaluations provide this indication. You'll see MMLU, GPQA, Big Bench etc in reports for many models. Those numbers provide the indication you're looking for.

To answer a question you didn't ask: with small models especially, we need to make choices as to what to focus on. For this model we focused on text summarization and instruction following, with the idea that users would finetune to gain performance on the task set that is relevant to them.

canyon289•5h ago
Yes! To me the primary value is not just as a teaching or toy model. I see a lot of value in repeatable tasks if we think about enterprise use, and as a local, fast developer model for individual usage.

Here's some examples that are inspired by previous roles I had outside of Google, where a business I was working in needed real time text processing.

These tutorials were made with Gemma versions from a year ago, but could now be recreated with Gemma 3 270M.

https://developers.googleblog.com/en/gemma-for-streaming-ml-... https://www.youtube.com/watch?v=YxhzozLH1Dk

numpad0•4h ago
It seems to be more often correct than wrong for multilingual translation tasks (source text from [1][2]). Rough, but probably useful as a traveler's phrase book.

  You 
  Translate to English: Рошешуа́р (фр. Rochechouart) — місто, муніципалітет у Франції, у регіоні Нова Аквітанія, департамент Верхня В'єнна. Населення — 3637 осіб (2022)[1]. Муніципалітет розташований на відстані[2] близько 360 км на південь від Парижа, 34 км на захід від Ліможа.

  gemma-3-270m-it-qat
  Here's the translation:

  "Rochechouart is a city in France, a municipality located in the region of New Aeterna, department within the Upper Vienne Province. The population is 3637 people (2022). The city is situated at a distance of approximately 360 kilometers from Paris, 34 km from Lyon.

  You
  Translate to English: Bộ Công Thương đề xuất hỗ trợ hộ gia đình lắp điện mặt trời mái nhà, có hệ thống lưu trữ tối đa 2-3 triệu đồng, hỗ trợ lãi suất vay trong tối đa ba năm. Nội dung được Bộ Công Thương đưa ra tại dự thảo lần 3 Quyết định của Thủ tướng về chính sách hỗ trợ cho hộ gia đình lắp đặt điện mặt trời mái nhà tự sản, tự tiêu và hệ thống lưu trữ điện.

  gemma-3-270m-it-qat
  The Ministry of Industry and Trade proposes supporting households with solar panels in their homes, with a maximum savings of 2-3 million đồng, providing loan assistance for up to 3 years. The content of the Ministry's decision at the Third Review of the Government's Policy on Support for Households Installing Solar Panels in Homes, approved by the Prime Minister about the policy support for households installing solar panels themselves, and the system storage capacity is maximum 2-3 million đồng.
1: https://uk.wikipedia.org/wiki/%D0%A0%D0%BE%D1%88%D0%B5%D1%88...

2: https://vnexpress.net/lap-dien-mat-troi-mai-nha-tu-dung-co-t...

yawnxyz•1h ago
If you LoRA them you can make them VERY VERY good at a small, narrow set of tasks, e.g.:

- reply in a specific way, like a specific JSON schema, or in the voice of a character
- be very good at classifying text (e.g. emails, or spam)
- be a great summarizer for large amounts of text, e.g. turn emails into short titles or url slugs
- add tags/categories per your pre-defined rules (e.g. for communities, tagging content, marketing)
- detect spam, or duplicates, or flag things

You won't be able to write code or prose with these, but they're great for a huge array of very narrow use cases.

What's neat about "stupid" models like this is that they're less likely to go off and dream up a bunch of irrelevant content, because they don't know much about the world / won't have too much context to pull from
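A minimal LoRA sketch with the Hugging Face PEFT library; the hub id `google/gemma-3-270m`, the target module names, and the hyperparameters are illustrative assumptions, not a vetted recipe:

  # Attach small trainable LoRA adapters to the attention projections.
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, get_peft_model

  base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")
  cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                   task_type="CAUSAL_LM")
  model = get_peft_model(base, cfg)
  model.print_trainable_parameters()  # only the adapter weights train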

lsb•5h ago
That’s wild that with a KV cache and compilation on the Mac CPU you are faster than on an A100 GPU.
Weryj•4h ago
This would be because the GPU can't fill its wavefronts and hide memory latency, no? I'm curious for a reason why.
punnerud•4h ago
Because on Mac the CPU and GPU share memory, but the A100 needs to transfer to RAM/CPU for the parts that aren't supported by the GPU?

(My first guess)

ladberg•1h ago
Given that the compiled version is slower than the eager version on A100, there's definitely something suboptimal happening there.
ModelForge•38m ago
No, the compiled version is actually faster.

From that table, the A100 tok/sec (larger is faster) numbers are:

- Eager: 28

- Compiled: 128

And

- KV cache eager: 26

- KV cache compiled: 99

The reason the KV cache is slower is likely that it's not GPU-optimized code. On CPU the KV cache is faster. To make it faster on GPU, you would, for example, pre-allocate the tensors on the device instead of `torch.cat`ting them on the fly.
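A sketch of that pre-allocation idea (shapes and names are illustrative, not the repo's actual code):

  # Allocate the cache once, then write each step's K/V in place
  # instead of reallocating with torch.cat on every decode step.
  import torch

  B, H, MAX_LEN, D = 1, 4, 32768, 64
  k_cache = torch.zeros(B, H, MAX_LEN, D, device="cuda")
  v_cache = torch.zeros_like(k_cache)

  def update(pos, k_new, v_new):  # k_new/v_new: (B, H, 1, D)
      k_cache[:, :, pos] = k_new.squeeze(2)
      v_cache[:, :, pos] = v_new.squeeze(2)
      return k_cache[:, :, : pos + 1], v_cache[:, :, : pos + 1]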

ModelForge•42m ago
Could be an artifact of the small size not fully taking advantage of the GPU. For example, for the slightly larger Qwen3 0.6B model the A100 is faster (you can see it when scrolling to the bottom here: https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/11...)
shekhar101•4h ago
Can someone (or OP) point me to a recipe to fine-tune a model like this for natural language tasks like complicated NER or similar workflows? I tried finetuning Gemma 3 270M when it came out last week without any success. A lot of tutorials are geared towards chat applications and role playing, but I feel this model could be great for use cases like mine, where I am trying to clean up and extract data from PDFs with entity identification and such.
hmottestad•2h ago
Have you tried this one here by any chance?

https://huggingface.co/dslim/bert-base-NER

Just wondering if it’s worth testing and what it would be most useful for.

nolist_policy•39m ago
This is using the gemma-llm python library which uses JAX in the background: https://gemma-llm.readthedocs.io/en/latest/colab_finetuning....
lgessler•16m ago
If you're really just doing traditional NER (identifying non-overlapping spans of tokens which refer to named entities) then you're probably better off using encoder-only (e.g. https://huggingface.co/dslim/bert-large-NER) or encoder-decoder (e.g. https://huggingface.co/dbmdz/t5-base-conll03-english) models. These models aren't making headlines anymore because they're not decoder-only, but for established NLP tasks like this which don't involve generation, I think there's still a place for them, and I'd assume that at equal parameter counts they quite significantly outperform decoder-only models at NER, depending on the nature of the dataset.
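For a quick trial of the encoder-only route (model id from the link above; the sample sentence is made up):

  # Off-the-shelf token classification; spans are aggregated into entities.
  from transformers import pipeline

  ner = pipeline("token-classification", model="dslim/bert-large-NER",
                 aggregation_strategy="simple")
  print(ner("Angela Merkel visited the Siemens plant in Munich."))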
keeeba•4h ago
What use-cases do you see for the 270M’s embeddings, and should we be sticking to token embeddings or can we meaningfully pool for sentence/document embeddings?

Do we need to fine-tune for the embeddings to be meaningful at the sentence/document level?

eachro•4h ago
If you wanted to train it from scratch, how long would it take on a reasonable GPU setup?
canyon289•3h ago
The word "reasonable" is vague, but assuming you mean something that could be run in a residential unit, it would take a very long time if training from pure scratch.

This is part of the rationale for releasing this model. Now you don't have to start from scratch, and finetuning is reasonable on a wide variety of hardware, including reasonable GPU setups (and smaller).

rck•2h ago
For the sake of comparison, you can train a 124M model on a 3090 (see nanoGPT). In that case, each batch ends up having about 500,000 tokens and takes maybe around 10ish seconds to run forward and backward. Then the 6 trillion tokens that this model was trained on would take approximately 4 years. Or just "too long" for a shorter answer.
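The arithmetic behind that estimate, as a quick check:

  # 6T training tokens / 500k tokens per batch * ~10 s per batch, in years:
  batches = 6e12 / 5e5                 # 12 million batches
  seconds = batches * 10               # 1.2e8 seconds
  print(seconds / (3600 * 24 * 365))   # ~3.8 years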
kace91•2h ago
This might be a very basic question, but as a dev whose only interaction with models is using the main commercial ones (Sonnet, ChatGPT and the like), what are some use cases for these smaller local models?

What usages can be reasonable to expect from them? Are there uses out of the box or does one have to go through some custom post-training to get useful behavior?

I feel like there is a huge gap between understanding models as a user of commercial tools and the kind of discussions happening in these threads, but I’m not sure what are the in-between steps.

barrkel•2h ago
Summarization, very basic tool use, without needing to go across the internet and back, and zero cost because of edge compute.
_giorgio_•2h ago
Maybe also secrecy and privacy.
canyon289•2h ago
It's a crucial question. I wrote up a long answer here. Let me know if it helps.

https://news.ycombinator.com/item?id=44913558

ModelForge•47m ago
I'd say the common ones (besides educational) are

- private, on-device models (possibly with lower latency than models via web API); also edge devices

- algorithm research (faster and cheaper to prototype new ideas)

- cheap tasks, like classification/categorization; sure, you don't need a decoder-style LLM for that, but it has the advantage of being more free-form, which is useful in many scenarios; or maybe a sanity checker for grammar; or even a router to other models (GPT-5 style)

_giorgio_•2h ago
what a legend
quesne•1h ago
Thought it was a new 3270 interface, bummed.