
MiniMax-M1 open-weight, large-scale hybrid-attention reasoning model

https://github.com/MiniMax-AI/MiniMax-M1
158•danboarder•4h ago•26 comments

Terpstra Keyboard

http://terpstrakeyboard.com/web-app/keys.htm
26•xeonmc•1h ago•3 comments

Scrappy - make little apps for you and your friends

https://pontus.granstrom.me/scrappy/
211•8organicbits•6h ago•73 comments

I counted all of the yurts in Mongolia using machine learning

https://monroeclinton.com/counting-all-yurts-in-mongolia/
65•furkansahin•3h ago•12 comments

The Grug Brained Developer (2022)

https://grugbrain.dev/
815•smartmic•15h ago•346 comments

Honda conducts successful launch and landing of experimental reusable rocket

https://global.honda/en/topics/2025/c_2025-06-17ceng.html
1086•LorenDB•20h ago•332 comments

Show HN: Lstr – A modern, interactive tree command written in Rust

https://github.com/bgreenwell/lstr
145•w108bmg•9h ago•41 comments

A Straightforward Explanation of the Good Regulator Theorem

https://www.lesswrong.com/posts/JQefBJDHG6Wgffw6T/a-straightforward-explanation-of-the-good-regulator-theorem
22•surprisetalk•3d ago•2 comments

Building Effective AI Agents

https://www.anthropic.com/engineering/building-effective-agents
399•Anon84•17h ago•72 comments

OpenSERDES – Open Hardware Serializer/Deserializer (SerDes) in Verilog

https://github.com/SparcLab/OpenSERDES
46•peter_d_sherman•8h ago•4 comments

3D-printed device splits white noise into an acoustic rainbow without power

https://phys.org/news/2025-06-3d-device-white-noise-acoustic.html
159•rbanffy•2d ago•31 comments

What Google Translate can tell us about vibecoding

https://ingrids.space/posts/what-google-translate-can-tell-us-about-vibecoding/
195•todsacerdoti•16h ago•110 comments

Now might be the best time to learn software development

https://substack.com/home/post/p-165655726
239•nathanfig•20h ago•157 comments

Resurrecting a dead torrent tracker and finding 3M peers

https://kianbradley.com/2025/06/15/resurrecting-a-dead-tracker.html
547•k-ian•18h ago•168 comments

Making 2.5 Flash and 2.5 Pro GA, and introducing Gemini 2.5 Flash-Lite

https://blog.google/products/gemini/gemini-2-5-model-family-expands/
325•meetpateltech•19h ago•188 comments

Why JPEGs still rule the web (2024)

https://spectrum.ieee.org/jpeg-image-format-history
181•purpleko•20h ago•320 comments

Proofs Without Words

https://artofproblemsolving.com/wiki/index.php/Proofs_without_words
60•squircle•4d ago•11 comments

LLMs pose an interesting problem for DSL designers

https://kirancodes.me/posts/log-lang-design-llms.html
171•gopiandcode•16h ago•106 comments

Preparation of a neutral nitrogen allotrope hexanitrogen C2h-N6

https://www.nature.com/articles/s41586-025-09032-9
7•bilsbie•2d ago•5 comments

Introduction to the A* Algorithm

https://www.redblobgames.com/pathfinding/a-star/introduction.html
4•auraham•1d ago•0 comments

Timescale Is Now TigerData

https://www.tigerdata.com/blog/timescale-becomes-tigerdata
120•pbowyer•20h ago•84 comments

Bzip2 crate switches from C to 100% Rust

https://trifectatech.org/blog/bzip2-crate-switches-from-c-to-rust/
284•Bogdanp•15h ago•129 comments

I Wrote a Compiler

https://blog.singleton.io/posts/2021-01-31-i-wrote-a-compiler/
77•ingve•3d ago•39 comments

Benchmark: snapDOM vs html2canvas

https://zumerlab.github.io/snapdom/
34•jmm77•4h ago•16 comments

Time Series Forecasting with Graph Transformers

https://kumo.ai/research/time-series-forecasting/
100•turntable_pride•17h ago•29 comments

Dinesh’s Mid-Summer Death Valley Walk (1998)

https://dineshdesai.info/dv/photos.html
63•wonger_•11h ago•23 comments

Strangers in the Middle of a City: The John and Jane Does of L.A. Medical Center

https://www.latimes.com/science/story/2025-06-15/l-a-seeks-help-for-a-patient-with-no-name
28•dangle1•2d ago•10 comments

Show HN: I made an online Unicode Cuneiform digital clock

https://oisinmoran.com/sumertime
89•OisinMoran•3d ago•25 comments

Foundry (YC F24) Hiring Early Engineer to Build Web Agent Infrastructure

https://www.ycombinator.com/companies/foundry/jobs/azAgJbN-foundry-software-engineer-new-grad-to-mid-level
1•lakabimanil•14h ago

From SDR to 'Fake HDR': Mario Kart World on Switch 2

https://www.alexandermejia.com/from-sdr-to-fake-hdr-mario-kart-world-on-switch-2-undermines-modern-display-potential/
90•ibobev•16h ago•77 comments

MiniMax-M1 open-weight, large-scale hybrid-attention reasoning model

https://github.com/MiniMax-AI/MiniMax-M1
156•danboarder•4h ago

Comments

swyx•2h ago
1. This is apparently MiniMax's "launch week": they did M1 on Monday and Hailuo 2 on Tuesday (https://news.smol.ai/issues/25-06-16-chinese-models). It remains to be seen whether they can keep up this pace of model releases for the rest of the week; these two were the big ones, and they aren't yet known for much beyond LLM and video models. Just watch https://x.com/MiniMax__AI for announcements.

2. MiniMax M1's tech report is worth reading: https://github.com/MiniMax-AI/MiniMax-M1/blob/main/MiniMax_M... While it may not be the SOTA open-weights model, they make some very big, notable claims about lightning attention and their GRPO variant (CISPO).

(I'm unaffiliated, just sharing what I've learned so far since no comments have been made here yet.)
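
For anyone skimming the report: lightning attention is, as I understand it, a tiled linear-attention variant, and M1 interleaves those layers with regular softmax attention (hence "hybrid-attention"). Here's a minimal sketch of the causal linear-attention recurrence that family builds on; purely illustrative, not their implementation, and the feature map and normalisation are placeholder choices of mine:

    # Illustrative only -- not MiniMax's code. Causal linear attention keeps a
    # fixed-size state S = sum_t outer(k_t, v_t) instead of attending over all
    # previous tokens, so each decoding step is O(d^2) rather than O(t*d).
    import numpy as np

    def linear_attention(Q, K, V):
        """Q, K, V: (seq_len, d) arrays; returns causal outputs of shape (seq_len, d)."""
        phi = lambda x: np.maximum(x, 0.0) + 1e-6   # placeholder positive feature map
        Q, K = phi(Q), phi(K)
        d = Q.shape[1]
        S = np.zeros((d, d))    # running sum of outer(k_t, v_t)
        z = np.zeros(d)         # running sum of k_t, used for normalisation
        out = np.empty_like(V)
        for t in range(Q.shape[0]):
            S += np.outer(K[t], V[t])
            z += K[t]
            out[t] = (Q[t] @ S) / (Q[t] @ z)
        return out

    q, k, v = np.random.randn(3, 16, 8)     # toy inputs: seq_len=16, d=8
    print(linear_attention(q, k, v).shape)  # (16, 8)

The point is the fixed-size state: softmax attention rereads every previous key for each new token (quadratic overall), while this recurrence is linear in sequence length, which is what makes the advertised 1M-token context plausible.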

noelwelsh•2h ago
A few thoughts:

* A Singapore-based company, according to LinkedIn. There doesn't seem to be much of a barrier to entry when it comes to building a very good LLM.

* Open-weight models plus the development of Strix Halo / Ryzen AI Max make me optimistic that running great LLMs locally will be relatively cheap in a few years.

rfoo•2h ago
> A Singapore based company, according to LinkedIn

Nah, this is a Shanghai-based company.

diggan•17m ago
No, they're based in Ireland.

Are we just saying stuff now? At least provide some sort of source if you're gonna say someone is wrong.

noelwelsh•9m ago
https://en.wikipedia.org/wiki/MiniMax_(company)
diggan•4m ago
Wikipedia in itself is not a source, and after reading the parent's message I went there to check too; surprise, surprise, neither of the statements has a source attached to it.
manc_lad•1h ago
It seems more and more inevitable that we will run models locally. The implications are both exciting and concerning.

If anyone has suggestions of people they respect who are thinking about this space, I'd love to hear more of their ideas and thoughts on these developments.

noelwelsh•1h ago
I think the main limitation, right now, is hardware. For GPUs the main limit is the VRAM available on consumer models. CPUs have plenty of memory but don't have the bandwidth or vector compute power for LLMs. This is why I think the Strix Halo is so exciting: it has bandwidth + compute power plus a lot of memory. It's not quite where it needs to be to replace a dedicated GPU, but in a few iterations it could be.

I'm interested in other opinions. I'm no expert on this stuff.
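
To put a rough number on the bandwidth point: during single-stream decoding the chip has to stream the active weights from memory once per token, so tokens/sec is roughly bandwidth divided by the bytes of weights read per token. The helper and the figures below are my own ballpark assumptions, not benchmarks:

    # Rough decode-speed ceiling: tokens/sec ~= bandwidth / bytes of weights
    # read per token. All numbers here are ballpark assumptions, not specs.
    def decode_ceiling_tok_s(bandwidth_gb_s, active_params_billion, bytes_per_param=0.5):
        # 0.5 bytes/param ~= a 4-bit quantised model
        bytes_per_token = active_params_billion * 1e9 * bytes_per_param
        return bandwidth_gb_s * 1e9 / bytes_per_token

    # Assuming a ~46B-active-parameter MoE like M1, quantised to 4 bits:
    for name, bw_gb_s in [("consumer dGPU (~1 TB/s)", 1000),
                          ("Strix Halo class APU (~256 GB/s)", 256),
                          ("dual-channel desktop DDR5 (~80 GB/s)", 80)]:
        print(f"{name:34s} ~{decode_ceiling_tok_s(bw_gb_s, 46):.0f} tok/s ceiling")

    # The dGPU row is bandwidth only: 24 GB of VRAM cannot actually hold
    # ~228 GB of 4-bit weights, which is the VRAM limit mentioned above.

Prefill (processing the prompt) is compute-bound rather than bandwidth-bound, which is roughly where the Strix Halo's weaker compute would show up relative to a dedicated GPU.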

jb1991•50m ago
How does the shared memory model for GPUs on Apple Silicon factor into this? These are technically consumer-grade and not very expensive, but they can offer a huge amount of memory since it is all shared between CPU and GPU; even a mid-tier machine can easily have 100 GB of GPU memory.
noelwelsh•36m ago
If you squint, the M4 is the same as the Strix Halo. Roughly:

* Double the bandwidth

* Half the compute

* Double the price for comparable memory (128GB)

I'm more interested in the AMD chips because of cost, and because, while I have an Apple laptop, I do most of my work on a Linux desktop, so a killer AMD chip works better for me. If you don't mind paying the Apple tax, a Mac is a viable option. I'm not sure about the software side of LLMs on Apple Silicon, but I can't imagine it's unusable.

An example of desktop with the Strix Halo is the Framework desktop (AI Max+ 395 is the marketing name for the Strix Halo chip with the most juice): https://frame.work/gb/en/products/desktop-diy-amd-aimax300/c...

pantulis•1h ago
Honest question: what is the concerning aspect to it?
vintermann•1h ago
"We publicly release MiniMax-M1 at this https url" in the arxiv paper, and it isn't a link to an empty repo!

I like these people already.

npteljes•1h ago
This is stated nowhere on the official pages, but it's a Chinese company.

https://en.wikipedia.org/wiki/MiniMax_(company)

iLoveOncall•1h ago
Why would you expect them to mention that on their project's page?
noelwelsh•1h ago
1. It's conventional to do so.

2. It's a legal requirement in some jurisdictions (e.g. https://www.gov.uk/running-a-limited-company/signs-stationer...)

3. It's useful for people who may be interested in applying for jobs

iLoveOncall•22m ago
1. No, it's not. Take a top GitHub repository from Google as an example (https://github.com/google/material-design-icons): I think you'd actually be hard-pressed to find a single repository where the company that owns it lists where it is registered.

2. This is a requirement for companies registered in the UK. You should also read your own link: it doesn't say anything about a company's presence on third-party websites.

3. This is such a remote reason it's laughable; there are plenty of things more relevant to potential job applicants, such as whether they are hiring at all.

You just want them to mention it because it's a Chinese company. If they were American, Mexican, German or Zimbabwean you wouldn't give the slightest fuck.

noelwelsh•11m ago
OP said "official pages", which I took to mean the company website: https://www.minimax.io/

Also, thanks for putting words in my mouth. If they were Mexican or Zimbabwean, I would find it very interesting to see a roughly SOTA model coming from that country.

spinningarrow•22m ago
> It's conventional to do so

Where do you see that? For example, I just checked https://openai.com/about/ and it doesn't say where they are based. I have no associations either way, but I usually have to work hard to find out where startups are based.

diggan•20m ago
> 1. It's conventional to do so.

I can't say I remember any model/weights release including the nation where the authors happen to live or where the company is registered. Usually they include some details about which languages they trained on, and disclose some of their relationships, which you could use to infer that.

But is it really a convention to include the nation the company happens to be registered in, or where the authors live, in submitted papers? I think that would stick out to me more than a paper missing such a detail.

noelwelsh•10m ago
OP said "official pages", which I took to mean the company website: https://www.minimax.io/ not the repo or the paper.
htrp•1h ago
They're apparently building buzz for an IPO:

https://www.bloomberg.com/news/articles/2025-06-18/alibaba-b...

markkitti•40m ago
Please come up with better names for these models. This sounds like the processor in my Mac Studio.
chvid•29m ago
https://en.wikipedia.org/wiki/Minimax

They named themselves after a classic AI algorithm.

diggan•23m ago
Also sounds like my long lost dog whose name was Max but he was tiny. Absolutely horrible name, borderline criminal I say.
reedlaw•15m ago
In case you're wondering what it takes to run it, the answer is 8x H200 141GB [1], which costs about $250k [2]; a back-of-envelope memory check follows the links.

1. https://github.com/MiniMax-AI/MiniMax-M1/issues/2#issuecomme...

2. https://www.ebay.com/itm/335830302628
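
As a sanity check on why it needs that much hardware, here is my own rough arithmetic; the ~456B total-parameter figure is what I've seen quoted for M1, so treat it as an assumption:

    # Back-of-envelope memory check. All figures are rough assumptions.
    TOTAL_PARAMS_B = 456      # quoted total parameter count for the MoE
    BYTES_PER_PARAM = 2       # bf16 weights
    H200_MEM_GB = 141
    NUM_GPUS = 8

    weights_gb = TOTAL_PARAMS_B * BYTES_PER_PARAM    # billions of params * bytes = GB, ~912 GB
    total_gb = H200_MEM_GB * NUM_GPUS                # 1128 GB across 8 GPUs
    print(f"weights ~{weights_gb} GB, available {total_gb} GB, "
          f"~{total_gb - weights_gb} GB left for KV cache and activations")

At the very long contexts the model advertises, the KV cache can eat a lot of that remaining headroom, which is presumably why the recommendation is eight GPUs rather than fewer.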