frontpage.

JVM Options Explorer

https://chriswhocodes.com/vm-options-explorer.html
43•0x54MUR41•2h ago•17 comments

Phyphox – Physical Experiments Using a Smartphone

https://phyphox.org/
48•_Microft•4h ago•11 comments

An Interview with Pat Gelsinger

https://morethanmoore.substack.com/p/an-interview-with-pat-gelsinger-2026
60•zdw•2d ago•24 comments

Happy Map

https://pudding.cool/2026/02/happy-map/
25•surprisetalk•4d ago•3 comments

The Miller Principle (2007)

https://puredanger.github.io/tech.puredanger.com/2007/07/11/miller-principle/
45•FelipeCortez•4d ago•34 comments

Apple update looks like Czech mate for locked-out iPhone user

https://www.theregister.com/2026/04/12/ios_passcode_bug/
225•OuterVale•4h ago•115 comments

Toffoli gates are all you need

https://www.johndcook.com/blog/2026/04/06/tofolli-gates/
82•ibobev•5d ago•17 comments

I run multiple $10K MRR companies on a $20/month tech stack

https://stevehanov.ca/blog/how-i-run-multiple-10k-mrr-companies-on-a-20month-tech-stack
355•tradertef•6h ago•229 comments

Anthropic downgraded cache TTL on March 6th

https://github.com/anthropics/claude-code/issues/46829
127•lsdmtme•6h ago•109 comments

How We Broke Top AI Agent Benchmarks: And What Comes Next

https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/
410•Anon84•17h ago•103 comments

What have been the greatest intellectual achievements? (2017)

https://www.thinkingcomplete.com/2017/09/what-have-been-greatest-intellectual.html
25•o4c•1h ago•27 comments

AI Will Be Met with Violence, and Nothing Good Will Come of It

https://www.thealgorithmicbridge.com/p/ai-will-be-met-with-violence-and
78•gHeadphone•3h ago•103 comments

Stewart Brand on how progress happens

https://www.newyorker.com/books/book-currents/stewart-brand-on-how-progress-happens
16•bookofjoe•4d ago•4 comments

Small models also found the vulnerabilities that Mythos found

https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
1140•dominicq•19h ago•305 comments

How Complex is my Code?

https://philodev.one/posts/2026-04-code-complexity/
136•speckx•5d ago•34 comments

447 TB/cm² at zero retention energy – atomic-scale memory on fluorographane

https://zenodo.org/records/19513269
230•iliatoli•16h ago•125 comments

No one owes you supply-chain security

https://purplesyringa.moe/blog/no-one-owes-you-supply-chain-security/
7•birdculture•51m ago•0 comments

Dark Castle

https://darkcastle.co.uk/
199•evo_9•16h ago•25 comments

Pijul, a FOSS distributed version control system

https://pijul.org/
168•kouosi•5d ago•25 comments

The End of Eleventy

https://brennan.day/the-end-of-eleventy/
180•ValentineC•10h ago•140 comments

Apple Silicon and Virtual Machines: Beating the 2 VM Limit (2023)

https://khronokernel.com/macos/2023/08/08/AS-VM.html
207•krackers•15h ago•145 comments

Network Flow Algorithms

https://www.networkflowalgs.com/
30•teleforce•5d ago•0 comments

Cirrus Labs to join OpenAI

https://cirruslabs.org/
270•seekdeep•23h ago•132 comments

Advanced Mac Substitute is an API-level reimplementation of 1980s-era Mac OS

https://www.v68k.org/advanced-mac-substitute/
251•zdw•21h ago•62 comments

Show HN: Pardonned.com – A searchable database of US Pardons

442•vidluther•1d ago•246 comments

Internet outage in Iran reaches 1,008 hours

https://mastodon.social/@netblocks/116384935123261912
6•miadabdi•1h ago•0 comments

Surelock: Deadlock-Free Mutexes for Rust

https://notes.brooklynzelenka.com/Blog/Surelock
224•codetheweb•3d ago•72 comments

How to build a `Git diff` driver

https://www.jvt.me/posts/2026/04/11/how-git-diff-driver/
121•zdw•18h ago•13 comments

US appeals court declares 158-year-old home distilling ban unconstitutional

https://www.theguardian.com/law/2026/apr/11/appeals-court-ruling-home-distilling-ban-unconstituti...
193•Jimmc414•7h ago•173 comments

Software Preservation Group: C++ History Collection

https://softwarepreservation.computerhistory.org/c_plus_plus/
33•quuxplusone•11h ago•3 comments

MiniMax M2.7 Is Now Open Source

https://firethering.com/minimax-m2-7-agentic-model/
78•steveharing1•2h ago

Comments

steveharing1•2h ago
Nvidia is providing a free API to try MiniMax M2.7
ctdinjeu5•1h ago
For those who like open source so much they want to use a provider
exe34•1h ago
Could you explain the incompatibility? These seem like orthogonal axes to me.
ctdinjeu5•1h ago
Just a joke, but you’re right, I can like open source and not want to self-host
avaer•1h ago
I think this is the majority view.
fifthace•1h ago
If the rumours are true that OpenAI is doing 70% margins on inference and Anthropic 30%, then open-weights models hosted on clouds happy with 10% margins increase competition and decrease cost. I’m game, like most. Much easier compliance with data-sovereignty concerns, too.
zozbot234•44m ago
> OpenAI are doing 70% margins on inference and Anthropic doing 30% margins

That difference is actually pretty surprising. Is Claude that much more expensive to host? The end-user pricing seems to be pretty similar, or better for OpenAI.
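Taking the rumoured margin figures above at face value (they are unverified), similar end-user pricing really would imply very different serving costs. A back-of-envelope sketch:

```python
def implied_cost(price: float, margin: float) -> float:
    """Serving cost implied by a gross margin: cost = price * (1 - margin)."""
    return price * (1.0 - margin)

price = 1.0  # normalized end-user price, assumed roughly equal across providers
openai_cost = implied_cost(price, 0.70)     # 0.30 of price at a 70% margin
anthropic_cost = implied_cost(price, 0.30)  # 0.70 of price at a 30% margin

# If the rumours held, Claude would have to cost ~2.3x more to serve per token.
print(round(anthropic_cost / openai_cost, 2))  # 2.33
```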

Demiurge•1h ago
It should be at least cheaper if anyone can host it, no?
adrian_b•33m ago
While I would not use an external provider, that may be a rational choice for some.

The most important advantage of using open-weights models is having perfectly predictable performance and costs in the future. When you can run the model on your own hardware, you are protected from price increases, subscription-limit decreases, or quality reductions of the provided models, as has already happened to users of Claude Code.

The disadvantage is that if you also want high speed, you need more expensive hardware. You may defer the cost of buying better hardware by using an external provider for now, while keeping in reserve the possibility of hosting the models yourself if anything makes the external providers worse.

aand16•1h ago
I'm wondering if anybody actually manages to use a new Nvidia account.

After logging into my shiny new Nvidia account, I'm presented with a banner saying "contact support to verify your account at help@build.nvidia.com".

I've contacted Nvidia support and haven't heard back. But they did send me a newsletter...

rvz•44m ago
With limits.

"free" does not mean what you think it means.

To Downvoters: I hope you have read the NVIDIA API Trial Terms of Service [0] before signing up. It clearly has restrictions and limitations.

From [0]:

> Unless you purchase a Subscription from NVIDIA or a Service Provider (as applicable), you may only use the API Service for internal testing and evaluation purposes, not in production. The terms and conditions of your Subscription will govern your production use of the API Service.

[0] https://assets.ngc.nvidia.com/products/api-catalog/legal/NVI...

girvo•1h ago
GGUFs are out too, well done Unsloth as usual!

https://huggingface.co/unsloth/MiniMax-M2.7-GGUF

I've been using M2.7 through the Alibaba coding plan for a bit now, and am quite impressed with its coding ability, and even more impressed when I see how small it is. Fascinating, really; it makes me wonder how big the frontier models are.

wg0•1h ago
Are you talking about this: https://www.alibabacloud.com/help/en/model-studio/coding-pla...

How does it compare to z.ai GLM?

girvo•1h ago
I am!

GLM-5 (which is all I have access to on it, not the newer GLM-5.1) is slightly better for the coding tasks I'm using them for, in terms of being more accurate slightly more often. Both are very good, and very close to one another in practice

Qwen3.5-plus is also quite excellent: all of these models feel pretty similar to Sonnet 4.5 in practice, though GLM-5 can show "Opus"-like reasoning through surprisingly long context chains, I've found.

zozbot234•48m ago
Qwen 3.5 is great and openly available, but it seems that Qwen 3.6 will only release smaller models (TBD, but the ~300B size seems to be excluded already).
girvo•22m ago
Yeah, I saw. Such a shame. I’m playing with a Q4 version of 3.5 122B-A10B on my Asus GX10; it’s kind of nuts how great a model you can run at home (with limitations, of course)
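
As a rough sanity check on why a 122B-parameter model fits on a desktop box (a sketch; the ~4.5 bits/weight for a Q4-class GGUF and the ~5% overhead are assumptions, not measured figures):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float,
                      overhead: float = 1.05) -> float:
    """Rough quantized model size: params * bits/weight / 8 bytes, plus
    ~5% for metadata and higher-precision layers (both figures assumed)."""
    return n_params * bits_per_weight / 8 / 1e9 * overhead

# 122B total parameters at ~4.5 bits/weight (typical for a Q4_K_M-class quant)
print(round(quantized_size_gb(122e9, 4.5), 1))  # 72.1 GB -- fits in 128 GB unified memory
```
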
hulk-konen•39m ago
I think GLM 5.1 is a step above M2.7 and Qwen 3.6. I’ve used it to do some planning when I ran out of Opus usage, and it’s done an OK job. I wouldn’t trust it with some of the more difficult data-shape edits etc., but it’s a nice option to have!

Composer 2, M2.7, and Qwen 3.6 are all capable of executing those plans just fine.

jbergqvist•1h ago
"Helped build itself" is a bit of a stretch here, it makes it sound as if the model was doing lasting self-improvements.

What the article describes is that the model was able to tweak to its own deployment harness (memory, skills, experimental loop etc) to improve performance on benchmarks. While impressive, it's not doing any modifications to its own weights by e.g. modifying the training code.

zozbot234•59m ago
By this standard, Claude "helped build itself" since Claude Code is 100% vibe coded. Not sure if this also applies somewhat to ChatGPT and Codex.
anonym29•1h ago
In addition to this conversation having already been started at https://news.ycombinator.com/item?id=47735348 yesterday, MiniMax M2.7 is not open source. The open weights have been released, which is definitely good and follows some of the spirit of open source, but it isn't the same thing.
adrian_b•24m ago
While an open-source model is obviously preferable to an open-weights model, the difference between the two is much less important than the difference between an open-weights model and a proprietary model.

There are far more people interested only in doing model inference, for which an open-weights model is sufficient, than people who also want to do model training, for which an open-source model would be needed. For inference, open weights are enough to avoid the uncertainties and costs of a subscription, and to enable making and using model harnesses better suited to one's specific needs than those offered commercially.

simonw•1h ago
Absolutely not "open source" - here's the license: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICE...

> Non-commercial use permitted based on MIT-style terms; commercial use requires prior written authorization.

And calling the non-commercial usage "MIT-style terms" is a stretch - they come with a bunch of extra restrictions about prohibited uses.

It's open weights, not open source.

zozbot234•1h ago
It's not even open weights as generally understood; the non-commercial restriction is pretty severe. The earlier M2.5 model will still be preferred for many purposes.
orlp•53m ago
I've flagged the post; the title is editorialized. The title on the blog post is "MiniMax M2.7: The Agentic Model That Helped Build Itself" (at least at the time of writing this).
zozbot234•51m ago
"Helped build itself" is arguably also a stretch as noted in another comment.
MarsIronPI•49m ago
Even the MIT-licensed weights are just that: open weights. Let's not call the weights "source", because they're emphatically not. I can't retrain Qwen from the ground up with different pre-training algorithms, for example.
zozbot234•42m ago
Model weights are source because they are "the preferred form for modification"; e.g., you can use them for fine-tuning. Training a new model from raw data (1) gets you something very different from the original and (2) is computationally infeasible for most, compared to simpler fine-tuning.
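
The fine-tuning point can be made with a toy model (a deliberately trivial sketch: a one-parameter regression stands in for an LLM, and all numbers are made up). Adjusting existing weights reaches a new target far more cheaply than optimizing from scratch on the same budget:

```python
def loss(w, data):
    """Mean squared error of a one-parameter linear model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def sgd_steps(w, data, lr=0.01, steps=20):
    """Plain gradient descent on the MSE above, for a fixed step budget."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

new_data = [(x, 2.5 * x) for x in range(1, 6)]  # "fine-tuning" target: slope 2.5

tuned = sgd_steps(2.0, new_data)      # start from "pretrained" weights near the target
retrained = sgd_steps(0.0, new_data)  # start from scratch, same compute budget

print(loss(tuned, new_data) < loss(retrained, new_data))  # True
```
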
littlestymaar•39m ago
I've yet to see a convincing explanation of what makes such a “license” legally binding in the first place.

There's no copyright on model weights themselves (because they are produced purely mechanically, without involving human creativity, the same way there's no copyright on the compiled artifacts of a piece of software or an H.264-encoded movie file). For software and movies, copyright covers the source material, not the resulting binary, and for LLMs the source material can also be protected by copyright. The problem is that LLM makers don't own most of the copyright on the source material; worse, they claim the training process is transformative enough to erase the copyright of the source material, so even the part of the training data for which they do own copyright couldn't extend copyright protection to the weights.

It's very likely that these licenses are entirely devoid of legal value (and I don't think Meta has taken any legal action, not even a DMCA takedown, against any of the bazillion Llama fine-tunes violating the Llama license on Hugging Face).

wg0•1h ago
In my experience, even MiniMax M2.5 is a very capable model; with some hand-holding, it can do a good investigation into an issue deep down multiple layers of a software stack, provided you keep asking the right questions.

I am pretty sure MiniMax M2.7 would be much better.

fg137•1h ago
What's people's experience of using MiniMax for coding?

I had a really bad time with it. I use (real) Claude Code for work so I know what a good model feels like. MiniMax's token plan is nice but the quality is really far from Claude models.

I needed to constantly "remind" it to get things done. Even for a four-sentence prompt in a session well below the context window, MiniMax would ignore half of it. This happens all the time. (This is Claude Code + the MiniMax API, set up using the official instructions.)

Basically, if I say get A, B and C done, it will only do A and B. I say, you still need to do C, so it does C but reverts the code for A.

Things that Claude can usually one-shot take 5 iterations with MiniMax.

I ended up switching to Claude to get one of my personal projects done.
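
For anyone wanting to reproduce that setup: Claude Code can be pointed at a third-party Anthropic-compatible endpoint via environment variables. A minimal sketch; `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` are standard Claude Code overrides, but the MiniMax endpoint URL below is an assumption, so check the provider's official instructions:

```shell
# Point Claude Code at an Anthropic-compatible third-party endpoint.
# The URL below is assumed, not taken from the thread -- verify with MiniMax docs.
export ANTHROPIC_BASE_URL="https://api.minimax.io/anthropic"
export ANTHROPIC_AUTH_TOKEN="<your-minimax-api-key>"
claude  # launch Claude Code against the configured endpoint
```
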

how_gauche•1h ago
I love it. It's not quite as good as Sonnet, but it's quick, and MiniMax 2.5 is about 1/4 the cost of Haiku. With enough of a harness around it, almost any breed of monkey can be coerced into producing excellent typewriter work. GLM 5 and 5.1 are other really competitive options on the price/performance curve.
stavros•55m ago
I haven't tried MiniMax, but Claude has gotten seriously nerfed lately. A few weeks ago I could code all week on the $100/mo plan without getting close to the limit; now I've consumed half the limit in the first day.

Ridiculous; my company has committed to $200k annual plans and they changed the deal midway. We'll have to see about a refund.

helix278•56m ago
> That is not a benchmark result. That is a different way of thinking about how AI models get built.

tiresome

mr_johnson123•50m ago
It’s seems not to be completely open source.