frontpage.

OpenAI bringing back GPT-4o to ChatGPT Plus users

https://old.reddit.com/r/ChatGPT/comments/1mkae1l/comment/n7nelhh/
2•rob•1m ago•0 comments

Show HN: New Angular OpenAPI Client gen (looking for testers)

https://ng-openapi.dev/
1•tjami•2m ago•0 comments

Ask HN: Does No Response Mean a Bad Idea?

1•samehsbs•3m ago•0 comments

Jim Lovell Has Died

https://en.wikipedia.org/wiki/Jim_Lovell
1•ColinWright•4m ago•1 comment

ChatGPT Will Apologize for Anything

https://www.aiweirdness.com/chatgpt-will-apologize-for-anything/
2•xnx•4m ago•0 comments

Apollo 13 Commander Jim Lovell has passed away

https://www.nasa.gov/news-release/acting-nasa-administrator-reflects-on-legacy-of-astronaut-jim-lovell/
2•LorenDB•6m ago•0 comments

Show HN: HackMaster Pi – A $30 Flipper Zero Alternative Built with Raspberry Pi

https://github.com/1PingSun/HackMaster-Pi
1•1ping•7m ago•0 comments

How to Teach Your Kids to Play Poker: Start with One Card

https://www.bloomberg.com/news/articles/2025-08-08/how-to-teach-your-kids-poker-with-one-card-at-age-four
1•ioblomov•7m ago•1 comment

ChatGPT-5 Can't Do Basic Math

5•MarcellusDrum•11m ago•0 comments

Security alerts in Gmail. What a mess

2•chrisjj•12m ago•0 comments

GPT-5 AMA

https://www.reddit.com/r/ChatGPT/s/37th7HY644
2•IdealeZahlen•13m ago•0 comments

Johns Hopkins is building its AI wargaming tools for DoD

https://breakingdefense.com/2025/08/johns-hopkins-is-building-classified-versions-of-its-ai-wargaming-tools-for-dod-ic/
1•geox•13m ago•0 comments

Fears of population collapse in the US are based on faulty assumptions

https://theconversation.com/fears-that-falling-birth-rates-in-us-could-lead-to-population-collapse-are-based-on-faulty-assumptions-261031
1•PaulHoule•14m ago•0 comments

GPT-5 Rollout Updates

https://twitter.com/sama/status/1953893841381273969
3•tosh•15m ago•0 comments

Cordoomceps – replacing an Amiga's brain with Doom

https://mjg59.dreamwidth.org/73001.html
1•LorenDB•15m ago•0 comments

Millions are flocking to grow virtual gardens in Roblox game created by teenager

https://apnews.com/article/roblox-game-grow-garden-trend-2f5e4368448d57002d08b1b3d4a289ca
1•petethomas•18m ago•1 comment

The Illustrated TLS 1.2 Connection

https://tls12.xargs.org/
1•dmazin•19m ago•0 comments

The surprising economics of the meat industry – Lewis Bollard

https://www.dwarkesh.com/p/lewis-bollard
2•paulpauper•20m ago•0 comments

Job growth has slowed sharply; the question is why

https://stayathomemacro.substack.com/p/job-growth-has-slowed-sharply-the
14•paulpauper•20m ago•4 comments

Campaigning for Extinction: Eradication of Sparrows and the Great Famine in China

https://www.nber.org/papers/w34087
1•paulpauper•20m ago•0 comments

GRETA to Open a New Eye on the Nucleus

https://newscenter.lbl.gov/2025/08/08/greta-to-open-a-new-eye-on-the-nucleus/
1•gnabgib•21m ago•0 comments

HTTP Is Not Simple

https://daniel.haxx.se/blog/2025/08/08/http-is-not-simple/
4•thunderbong•23m ago•1 comment

Looking for Testers for an AI Privacy Platform

https://scanonai.carrd.co
1•lotuslabs•24m ago•1 comment

Three Tiers of Responses to Fact

https://medium.com/on-history/three-tiers-of-responses-to-fact-9b551f2a4fb6
2•wsgeorge•26m ago•0 comments

Toxic convenience: what science tells us about plastic's hidden costs

https://www.rfi.fr/en/international/20250808-toxic-convenience-what-science-tells-us-about-plastic-s-hidden-costs
2•everybodyknows•28m ago•0 comments

ChatGPT users hate GPT-5's overworked secretary energy, miss their GPT-4o buddy

https://arstechnica.com/ai/2025/08/chatgpt-users-outraged-as-gpt-5-replaces-the-models-they-love/
5•rntn•28m ago•0 comments

Welcome to DIY Rich Guy Fantasy Camp

https://www.theglobeandmail.com/arts/article-diy-rich-guy-fantasy-camp-mandle-cheung-bezos-ackman/
2•throw0101a•31m ago•1 comment

FIN - Fish Extensible Text Editor Written in Fish

https://codeberg.org/Digit/fin/
2•ashitlerferad•32m ago•0 comments

json2dir: a JSON-to-directory converter, a fast alternative to home-manager

https://github.com/alurm/json2dir
6•alurm•32m ago•0 comments

M5 MacBook Pro No Longer Coming in 2025

https://www.macrumors.com/2025/07/10/no-m5-macbook-pro-2025/
9•behnamoh•34m ago•0 comments

Open SWE: An open-source asynchronous coding agent

https://blog.langchain.com/introducing-open-swe-an-open-source-asynchronous-coding-agent/
27•palashshah•3h ago

Comments

dabockster•2h ago
> We believe that all agents will look more like this in the future - long running, asynchronous, more autonomous. Specifically, we think that they will:

> Run asynchronously in the cloud

> cloud

Reality check:

https://huggingface.co/Menlo/Jan-nano-128k-gguf

That model will run, with decent conversation quality, at roughly the same memory footprint as a few Chrome tabs. It's only a matter of time until we get coding models that can do that, and then only a further matter of time until we see agentic capabilities at that memory footprint. I mean, I can already get agentic coding with one of the new Qwen3 models - super slowly, but it works in the first place. And the quality matches or even beats some of the cloud models and vibe coding apps.

And that model is just one example. Researchers all over the world are making new models almost daily that can run on an off-the-shelf gaming computer. If you have a modern Nvidia graphics card, you can run AI on your own computer totally offline. That's the reality.
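For readers who want to try the local route the parent describes, a minimal sketch with llama-cpp-python follows; the quantized file name, context size, and settings are placeholders, not anything the commenter specified.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name is a placeholder; any small quantized model such as
# Jan-nano-128k works the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="./jan-nano-128k-Q4_K_M.gguf",  # placeholder quantized weights
    n_ctx=8192,       # modest context; the full 128k window needs far more RAM
    n_gpu_layers=-1,  # offload every layer to the GPU if one is present
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```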

koakuma-chan•1h ago
Do you know what "MCP-based methodology" is? I am skeptical of a 4B model scoring twice as high as Gemini 2.5 Pro.
dabockster•1h ago
Yeah I know about Model Context Protocol. But it's still only a small part of the AI puzzle. I'm saying that we're at a point now where a whole AI stack can run, in some form, 100% on-device with okayish accuracy. When you think about that, and where we're headed, it makes the whole idea of cloud AI look like a dinosaur.
koakuma-chan•1h ago
I mean, I am asking what "MCP-based methodology" is, because it doesn't make sense for a 4B model to outperform Gemini 2.5 Pro et al. by that much.
toshinoriyagi•39m ago
I'm not too sure what "MCP-based methodology" is, but Jan-nano-128k is a small model specifically designed to be able to answer in-depth questions accurately via tool-use (researching in a provided document or searching the web).

It outperforms those other models, which are not using tools, thanks to the tool use and specificity.

Because it is only 4B parameters, I believe it is naturally terrible at other things; it's not designed for them and doesn't have enough parameters.

In hindsight, "MCP-based methodology" likely refers to its tool-use.
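To make that concrete, under MCP the model answers by calling tools served from a small separate process, roughly like the sketch below (official `mcp` Python SDK; the search function is a stub, not Jan-nano's actual tooling).

```python
# Sketch of an MCP server exposing a single search tool, using the official
# `mcp` Python SDK (pip install mcp). A tool-using model answers benchmark
# questions by calling tools like this instead of relying on its weights alone.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("search-tools")

@mcp.tool()
def web_search(query: str) -> str:
    """Return search results for a query (stubbed for illustration)."""
    # A real server would hit a search API or a document index here.
    return f"Top results for {query!r}: ..."

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to an MCP-capable client
```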

Martinussen•1h ago
Data storage has gotten cheaper and more efficient every year for decades, yet people seem content with less storage, split between their phone and laptop, than a mid-range desktop had a decade and a half ago, leaving everything else to the "> cloud". I wouldn't be so sure we're going to see people reach for technological independence this time either.
merelysounds•49m ago
One factor here is people preferring portable devices. Note that portable SSDs are also popular.

Also, cloud storage solutions (like archiving or collaboration) have different usage patterns than AI so far.

prophesi•49m ago
I'm also excited for local LLMs to become capable of assisting with nontrivial coding tasks, but we're far from that point. VRAM remains a huge bottleneck even for a top-of-the-line gaming PC. The models that currently come closest to the vibe-check of frontier models for agentic coding seem to be Qwen3-Coder-480B-A35B-Instruct, DeepSeek-Coder-V2-236B, GLM 4.5, and GPT-OSS-120B, the last being the only one capable of fitting on a 64 to 96 GB VRAM machine with quantization.

Of course, the line will always be pushed back as frontier models incrementally improve, but the quality is night and day between these open models consumers can feasibly run versus even the cheaper frontier models.

That said, I too have no interest in this if local models aren't supported, and I hope that support is in the pipeline just so I can try tinkering with it. It looks like it utilizes multiple models for various tasks (planner, programmer, reviewer, router, and summarizer), which only adds to the VRAM bottleneck if you'd like to load a different model per task. So I think it makes sense for them to focus on just Claude for now to prove the concept.

edit: I personally use Qwen3 Coder 30B 4bit for both autocomplete and talking to an agent, and switch to a frontier model for the agent when Qwen3 starts running in circles.
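The VRAM wall is easy to sanity-check with weights-only arithmetic; a rough sketch, ignoring KV cache and runtime overhead, which add more on top:

```python
# Weights-only memory estimate: parameters x bits per weight / 8 bytes.
# Billions of params map directly to GB because the 1e9 factors cancel.
def weight_gb(params_billion: float, bits: float) -> float:
    return params_billion * bits / 8

for name, params_b in [("Qwen3-Coder-480B", 480),
                       ("DeepSeek-Coder-V2-236B", 236),
                       ("GPT-OSS-120B", 120)]:
    print(f"{name}: ~{weight_gb(params_b, 4):.0f} GB at 4-bit, "
          f"~{weight_gb(params_b, 16):.0f} GB at 16-bit")
# Only GPT-OSS-120B (~60 GB at 4-bit) fits the 64-96 GB budget mentioned above.
```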

cowpig•1h ago
I was excited by the announcement but then

> Runs in an isolated sandbox: Every task runs in a secure, isolated Daytona sandbox.

Oh, so fake open source? Daytona is an AGPL-licensed codebase that doesn't actually open-source the control plane, and the first instruction in the README is to sign up for their service.

> From the "open-swe" README:

Open SWE can be used in multiple ways:

* From the UI. You can create, manage and execute Open SWE tasks from the web application. See the 'From the UI' page in the docs for more information.

* From GitHub. You can start Open SWE tasks directly from GitHub issues simply by adding a label open-swe, or open-swe-auto (adding -auto will cause Open SWE to automatically accept the plan, requiring no intervention from you). For enhanced performance on complex tasks, use open-swe-max or open-swe-max-auto labels which utilize Claude Opus 4.1 for both planning and programming. See the 'From GitHub' page in the docs for more information.

* * *

The "from the UI" links to their hosted web interface. If I cannot run it myself it's fake open-source

mitchitized•1h ago
Hol up

How can it be AGPL and not provide full source? AGPL is like the most aggressive of the GPL license variants. If they somehow circumvented the intent behind this license, that is a problem.

esafak•55m ago
It's a hosted service with an open source client?
tevon•1h ago
Very cool! I'm using it now and really like the sidebar chat that allows you to add context during a run.

I hit an error that was not recoverable. I'd love to see functionality to bring all that context over to a new thread, or otherwise force it to attempt to recover.