frontpage.

Microsoft and OpenAI end their exclusive and revenue-sharing deal

https://www.bloomberg.com/news/articles/2026-04-27/microsoft-to-stop-sharing-revenue-with-main-ai...
213•helsinkiandrew•2h ago•194 comments

"Why not just use Lean?"

https://lawrencecpaulson.github.io//2026/04/23/Why_not_Lean.html
103•ibobev•1h ago•35 comments

The Woes of Sanitizing SVGs

https://muffin.ink/blog/scratch-svg-sanitization/
39•varun_ch•49m ago•7 comments

Apple is dropping AFP/TimeCapsule support in macOS 27

https://eclecticlight.co/2026/04/23/networking-changes-coming-in-macos-27/
35•pvtmert•45m ago•21 comments

Show HN: OSS Agent I built topped the TerminalBench on Gemini-3-flash-preview

https://github.com/dirac-run/dirac
169•GodelNumbering•3h ago•63 comments

Pgbackrest is no longer being maintained

https://github.com/pgbackrest/pgbackrest
313•c0l0•5h ago•151 comments

4TB of voice samples just stolen from 40k AI contractors at Mercor

https://app.oravys.com/blog/mercor-breach-2026
210•Oravys•6h ago•83 comments

Men who stare at walls

https://www.alexselimov.com/posts/men_who_stare_at_walls/
189•aselimov3•5h ago•94 comments

Fully Featured Audio DSP Firmware for the Raspberry Pi Pico

https://github.com/WeebLabs/DSPi
176•BoingBoomTschak•2d ago•35 comments

Tendril – a self-extending agent that builds and registers its own tools

https://github.com/serverless-dna/tendril
33•walmsles•2h ago•12 comments

FDA Approves First-Ever Gene Therapy for Treatment of Genetic Hearing Loss

https://www.fda.gov/news-events/press-announcements/fda-approves-first-ever-gene-therapy-treatmen...
103•JeanKage•6h ago•41 comments

Den stora Älgvandringen – The great moose migration (live)

https://www.svtplay.se/video/jXv3A5G/den-stora-algvandringen/idag-00-00
25•donjoe•2d ago•2 comments

Flipdiscs

https://flipdisc.io
452•skogstokig•4d ago•76 comments

US Supreme Court Reviews Police Use of Cell Location Data to Find Criminals

https://www.nytimes.com/2026/04/27/us/politics/supreme-court-cell-data-geofence.html
37•unethical_ban•51m ago•5 comments

Show HN: Utilyze – an open source GPU monitoring tool more accurate than nvtop

https://www.systalyze.com/utilyze
7•ManyaGhobadi•2h ago•0 comments

I bought Friendster for $30k – Here's what I'm doing with it

https://ca98am79.medium.com/i-bought-friendster-for-30k-heres-what-i-m-doing-with-it-d5e8ddb3991d
999•ca98am79•19h ago•522 comments

GitHub Copilot is moving to usage-based billing

https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/
38•frizlab•18m ago•8 comments

Managing the Unmanaged Switch

https://watchmysys.com/blog/2026/03/managing-the-unmanaged-switch/
19•luu•2d ago•3 comments

Running local LLMs offline on a ten-hour flight

https://deploy.live/blog/running-local-llms-offline-on-a-ten-hour-flight/
63•darccio•3h ago•43 comments

Dutch central bank ditches AWS and chooses Lidl for European Cloud

https://www.techzine.eu/news/infrastructure/140634/dutch-central-bank-chooses-lidl-for-european-c...
37•benterix•1h ago•20 comments

Quarkdown – Markdown with Superpowers

https://quarkdown.com/
127•amai•7h ago•20 comments

AI should elevate your thinking, not replace it

https://www.koshyjohn.com/blog/ai-should-elevate-your-thinking-not-replace-it/
728•koshyjohn•20h ago•516 comments

Understanding the short circuit in solid-state batteries

https://www.mpie.de/5151287/short-circuit-solid-state-batteries
27•hhs•1d ago•4 comments

Show HN: A terminal spreadsheet editor with Vim keybindings

https://github.com/garritfra/cell
47•garritfra•4h ago•22 comments

Getting my daily news from a dot matrix printer (2024)

https://aschmelyun.com/blog/getting-my-daily-news-from-a-dot-matrix-printer/
59•xupybd•2d ago•9 comments

TurboQuant: A first-principles walkthrough

https://arkaung.github.io/interactive-turboquant/
252•kweezar•14h ago•54 comments

Self-updating screenshots

https://interblah.net/self-updating-screenshots
412•bjhess•1d ago•68 comments

The Prompt API

https://developer.chrome.com/docs/ai/prompt-api
224•gslin•14h ago•116 comments

Supreme Court to Hear Arguments in Landmark Roundup Weedkiller Case

https://www.nytimes.com/2026/04/26/climate/supreme-court-bayer-monsanto-roundup-glyphosate.html
13•mikhael•36m ago•2 comments

Canva apologizes after its AI tool replaces 'Palestine' in designs

https://www.theverge.com/ai-artificial-intelligence/919028/canva-magic-layers-ai-replacing-palestine
16•alex_suzuki•1h ago•1 comment

Running local LLMs offline on a ten-hour flight

https://deploy.live/blog/running-local-llms-offline-on-a-ten-hour-flight/
63•darccio•3h ago

Comments

ddarolfi•1h ago
Qwen 4.6 36B? Do they mean Qwen3.6-35B-A3B?
mikeatlas•1h ago
yes
Johnny_Bonk•1h ago
So I have an RTX 3080 with 10GB of VRAM, which I've been using with Qwen2.5 Coder and Gemma 4 E2B. I'm wondering what models you have tried, and with which quants.
trvz•1h ago
Yes. The author is really sloppy if that wasn’t clear from the article.
deanc•1h ago
This has been exactly my experience too. I've tried multiple harnesses (pi, claude code, codex) with multiple variants of qwen3.6 and gemma4, driven by both mlx and ollama - and every single time I try to do anything meaningful I end up in a loop. On a 64GB MacBook Pro M3 Max.

I really don't know what the hell people are doing locally, and suspect a lot of the hype around running these models locally is bullshit. Sure, you can make it do something but certainly nothing useful or substantial.

ryandrake•1h ago
Same here. Every time a new local model comes out, I give it a spin with a pretty vanilla coding task ("refactor this method to take two parameters instead of one", or "fix this class of compiler warning across the ~20 file codebase") and more often than not, they get into endless loops or fail in very unusual ways. They don't yet even approach the usefulness of SOTA models. It's obviously not a fair comparison, though. My 20GB GPU is never going to beat whatever enormous backend Google or Anthropic have.
2ndorderthought•1h ago
You can do this with really small models, but you have to do more legwork. I wouldn't expect most trivially small models to handle anything more than 1 file reliably. The new qwen 3.6 is different though; I've heard of cases where it behaves close to Sonnet.

That said, I don't see why people are so scared to touch code, even if it saves them 500 euro a month. Using my IDE's find across my repo and auto-replacing 2 patterns is trivial and way faster to do by hand. I mostly use small models; it prevents a lot of the issues I've seen with large models and vibe/agentic coding over the medium to long term. I also write a lot of code.

proxysna•1h ago
You need to set sampling parameters for the LLM. I had the same issue with Qwen3.5 when I first started. You can usually grab them off the model card page.

From Qwen3.6 page:

Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0

Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0

Instruct (or non-thinking) mode: temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
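
For reference, here is a minimal sketch of passing those instruct-mode settings to a local runtime through Ollama's HTTP API (the server address is Ollama's default; the model tag and prompt are illustrative, not from the article):

    # Minimal sketch: apply the recommended instruct-mode sampling
    # parameters via Ollama's chat endpoint. Model tag is hypothetical.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwen3.6:35b-a3b",  # hypothetical tag
            "messages": [{"role": "user", "content": "Summarize min_p sampling."}],
            "stream": False,
            "options": {
                "temperature": 0.7,
                "top_p": 0.80,
                "top_k": 20,
                "min_p": 0.0,
                "presence_penalty": 1.5,
                "repeat_penalty": 1.0,  # Ollama's name for repetition_penalty
            },
        },
        timeout=300,
    )
    print(resp.json()["message"]["content"])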

deanc•1h ago
Yes, I have tried all of these (as per the docs). Have you actually tried them? Because I have tried all 3 configurations you mentioned with agentic coding, and I get the same result: loops.
2ndorderthought•1h ago
In the article the author describes what they made. It's definitely not bullshit, but it's also not as reliable or as hands-free as the 1T models.

For people who aren't completely vibe- or agent-coding, these models are better than, say, Copilot or the free models appearing after a Google search. Probably better than ChatGPT's flagships in some ways.

I mostly use 4B to 9B models for basic inquiries and code examples from libraries I haven't used before. Many of them can solve pretty hard math problems, and those are several steps away from, say, qwen3.6.

I would not discount running models locally. It's the best case scenario of a future with LLMs from a human rights and ecological perspective.

mft_•1h ago
I’m frequently surprised how little I can find online about exactly this - different harnesses for local models and how to set them up. The documentation for opencode with local models is (IMO) pretty bad - and even Claude Opus (!) struggled to get it running. And so far I’ve not found a decent alternative to Claude Desktop.

(I’ve recently discovered that you can pipe local models into Claude Code and Claude Desktop, so this is on my list to try.)

2ndorderthought•43m ago
Qwen3.6 is brand new. But also, search engines are so plastered with AI slop written by tools and companies that have no interest in you using local models. Ollama makes it 1 command to run small local models, but with the newest ones there can be kinks to work out first.

r/LocalLLaMA is okay for some information, but beyond that there is so much noise and very little signal. I think it's intentional.

NitpickLawyer•53m ago
> a lot of the hype around running these models locally is bullshit. Sure, you can make it do something but certainly nothing useful or substantial.

There is certainly a lot of hype around local models. Some of it is overhype, some of it is just "people finding out" and discovering what cool stuff you can do. I suspect the post is a reply to the other one a few days ago where someone from HF posted a pic of themselves on a plane, using a local model, and saying it's really, really close to Opus. That was BS.

That being said, I've been working with local LMs since before ChatGPT launched. The progress we've made from the likes of GPT-J (6B) and GPT-NeoX (20B) - some of the first models you could run on regular consumer hardware - is absolutely amazing. It has gone way above my expectations. We're past "we have ChatGPT at home" (as it was at launch), and now it is actually usable for a lot of tasks. Nowhere near SotA, but "good enough".

I will push back a bit on the "substantial" part, and I will push a lot on "nothing useful". You can absolutely get useful stuff out of these models. Not in a claude-code "leave it to cook for 6 hours and get a working product" way, but with a bit of hand-holding and scope reduction you can get useful stuff. When Devstral came out (24B) I ran it for about a week as a "daily driver" just to see where it's at. It was ok-ish. Lots of hand-holding; I figured out I couldn't use it for planning much (its output looked fine at a glance, but either didn't make sense or used outdated stuff). But with a better plan, it could handle implementation fine. I coded 2 small services that have been running in prod for ~6 months without any issues. That is useful, imo. And the current models are waaay better than Devstral 1.

As to substantial, eh... your substantial can be someone else's Taj Mahal, and their substantial could be your toy project. It all depends. I draw the line at useful. If you can string together a couple of useful tasks, it starts to become substantial.

xienze•41m ago
It's probably a combination of things:

* New models running in llama.cpp (what's under the hood of ollama et al) frequently require bug fixes.

* The GGUF models that run in llama.cpp frequently require bug fixes (Unsloth is notorious for this -- they release GGUF models about 10 minutes after official .safetensors releases).

* You're probably running a <Q8 quantization of the model, and quite possibly a <BF16 quantization for the KV cache. This makes for compounding issues as context grows and tool calls multiply.

Local models really are great, but I think a major problem is the people in groups like r/LocalLLaMA who run models at absurd quantization levels in order to cram them onto their underpowered hardware and convince themselves that they're running SOTA at home.

The best way to run these models is, frankly, a lot of VRAM and vLLM (which is what the people developing these models are almost certainly targeting).
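
For concreteness, a back-of-envelope sketch of the memory math behind that tradeoff (the formulas are the standard approximations; the architecture numbers are hypothetical):

    # Rough memory math for quantized weights and KV cache.
    # Approximations only; real runtimes add overhead on top.
    def weights_gb(params_billions, bits_per_weight):
        # Weight memory: params * bits / 8 bytes, expressed in GB.
        return params_billions * bits_per_weight / 8

    def kv_cache_gb(layers, kv_heads, head_dim, tokens, bytes_per_elem):
        # K and V tensors, per layer, per KV head, per token.
        return 2 * layers * kv_heads * head_dim * tokens * bytes_per_elem / 1e9

    # A 35B model: ~35 GB of weights at Q8, ~17.5 GB at Q4.
    print(weights_gb(35, 8), weights_gb(35, 4))

    # A BF16 (2-byte) cache at 64k tokens adds roughly 13 GB for a
    # hypothetical 48-layer model with 8 KV heads and head_dim 128.
    print(kv_cache_gb(48, 8, 128, 65536, 2))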

vladgur•1h ago
That window seat with the 14” laptop seems extremely claustrophobic.

That’s the real limitation on an economy flight - space rather than power or the internet… at least it would be for me.

The only times I was able to get my laptop out and do some productive work were when I was either sitting in a premium economy aisle seat with room to spare or when there was an empty seat next to me.

rootusrootus•1h ago
I'd probably choose the window seat myself, because while it is cramped, it is predictably so. When I sit in an aisle seat, it's not as cramped but I regularly get shoulder checked by passing people or beverage carts.

What really makes me nervous if I'm in an economy seat is the seat in front of me. Depending on how the seat is designed, if the person suddenly reclines (or hell, just flexes the seat a bunch while moving around), it can come pretty close to pinching the laptop screen. That would be bad news.

ryandrake•58m ago
That was the first thing I thought of when I saw the image. That's a very expensive computer that you risk destroying when the 300lb guy in front of you decides to lean back.

The ergonomics of using a laptop on an economy-class tray table are not worth it. You're sitting there like a T-rex trying to make your arms as small as possible to tap on the keys. And the vertical viewing angle to your screen sometimes prevents you from even seeing anything. I wouldn't even bring my laptop out during a flight.

walthamstow•55m ago
In the image it's on his lap, not the tray table. I agree, using the tray is not worth it. The ideal is a tray that folds in half so I can use that to hold a drink and keep the machine on my lap.

The tradeoff of poor comfort is insane productivity, for me anyway. Being restricted in place, no wifi, inconvenient toilet breaks, not in control of meal times, all means I get a lot of work done

sweetjuly•16m ago
>The ergonomics of using a laptop on an economy-class tray table are not worth it. You're sitting there like a T-rex

The trick I've found is to pack a bluetooth keyboard. If you put your laptop on the tray table, you can put the bluetooth keyboard on your legs _under_ the tray table and have your arms fully and comfortably extended. This works especially well if you're a vim/emacs/other keyboard-driven editor user, as you very rarely need to reach up to poke the trackpad.

bs7280•43m ago
I have a 16" M1 Max that I only got because it was $1500 cheaper than MSRP, and it sucks on planes. I have really long arms and I can barely get it out of my bag without elbowing my neighbor.

A few years ago I saw some very interesting custom ergonomic setups optimized for traveling + flying.

One person with a ThinkPad is able to get the monitor 180 degrees flat with the keyboard, and can hang it off the seat. He also brings a split ergo keyboard with a lap mount.

Another person did something similar with an M1 laptop, but needs an iPad to act as the external monitor (the laptop stays in the bag), with a split ergo keyboard built and designed from scratch.

zdw•42m ago
That's a 16" (from the size of the speaker grille on each side of the keyboard), so even more claustrophobic.
stavros•25m ago
I got some Xreal glasses and it's made flights so much more enjoyable. I can watch movies or work on something lying back, and the "screen" looks massive.
JSR_FDED•23m ago
I’ve been so tempted but some of the reviews say it’s not good for reading code. What’s been your experience? What is the effective resolution of the screen you get? Is it sharp enough for coding?
stavros•19m ago
It's a definite "it depends". The resolution is fine, but I think it's more about the specific pair of glasses you get? I got the same model three times (long story); the first two were fine, but the third has some blurring in the middle of the right eye.

It's also uncomfortable to look at the very bottom of the screen (which is where all the chat text boxes are), so I usually resize all my windows to be a bit smaller. With that, it's very good (and you can always just increase the font size).

I would like glasses with a smaller FOV, so I didn't have to look around so much, but that's probably just me, since everyone else likes them larger.

bobro•1h ago
Can’t you guys just read a book and take a nap?
3form•1h ago
I suppose the ones that do wouldn't consider such a turn of events postworthy.
cpursley•33m ago
I'm jealous of people who can actually get comfortable enough to sleep on flights.
koolba•21m ago
With enough drinks and a long enough flight, it’s unavoidable.
fernie•14m ago
The keyword being "comfortable".

Most certainly avoidable, unfortunately.

stavros•30m ago
Why would I do that when making things is so much fun?
mdni007•29m ago
But then how can I show random people how productive I am?
ducttape12•9m ago
Yeah, for real. Imagine being so addicted to the AI slot machine that you can't be without it for 10 hours.
dude250711•4m ago
If you nap, then you might end up living in a world where someone else is making the world a better place better than you are.
j1000•1h ago
To be honest, I think the ability to work while traveling is a con rather than a perk of current times.
ryandrake•53m ago
It hit different at different points in my life. When I was in my 20s I thought "Wow! I get to go on an international trip to a place I've never been, and work is paying for everything?!? I'll go whenever you need me to go!" Now that I'm almost 50, it's "Fuck. Another 14 hour international flight, to somewhere I'll likely only have time to see the inside of two buildings. What's the local language again? Do I drive on the left or right? Wait, how long do I need to stay? Please no."
HoldOnAMinute•39m ago
They keep removing the ability for you to have any downtime.
bilekas•1h ago
Trying LLMs in the air with a 6,200 EUR laptop... sorry if it's not exactly relatable.
builderminkyu•58m ago
Tried doing exactly this with Ollama on a cross-country flight last month. My MacBook basically turned into a jet engine and the battery died in under an hour.

Curious if you had to heavily throttle the CPU or stick to super small quants (like 4-bit phi3) to actually make it through 10 hours without a power outlet?

tamimio•53m ago
Can’t wait for more people to do the same and laptops eventually getting banned on board due to fear of them catching fire...
scastiel•52m ago
Interesting - I did and documented the same kind of experiment a few months ago [1]; it looks like so much has changed since then!

[1] https://betweentheprompts.com/40000-feet/

mumbisChungo•31m ago
>Qwen 4.6 36B

Did the author mean Qwen3.6-27B? Qwen3.6-35B-A3B?

walrus01•23m ago
As much as it's a fun gimmick to run a relatively good-sized LLM like qwen 3.6 35B locally, I would much rather have the ability to run it remotely, on a piece of hardware I control, via a VPN session. Much better for battery life and heat. If I'm on an airplane I care about having as much battery life as possible.

Let's say you have a basic setup like llama.cpp and llama-server on a remote server (even if it's just sitting under your home office desk) running a 35GB Q8-quantized model of qwen 3.6 35B. It's not difficult to make llama-server available to your laptop over just about any form of internet connection and VPN.

Having the ability to run that same model locally is nice if you really need it because no internet connection whatsoever is available, but the times when you simultaneously have no internet and a serious need for something the model can output are fairly rare these days.
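
From the laptop side, that setup is just an HTTP call once the VPN is up, since llama-server exposes an OpenAI-compatible endpoint - a minimal sketch, with a placeholder VPN address and an illustrative model name:

    # Query a remote llama-server over the VPN. llama-server speaks the
    # OpenAI-compatible API; address and model name are placeholders.
    import requests

    resp = requests.post(
        "http://10.0.0.2:8080/v1/chat/completions",  # server reachable via VPN
        json={
            "model": "qwen3.6-35b-q8",  # illustrative
            "messages": [{"role": "user", "content": "Hello from seat 34A"}],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])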

seattle_spring•2m ago
With more and more flights offering Starlink, I don't see why this would really ever be necessary.

Also, agreed with the other commenters: just read a damn book and take a nap.