
An Update on Pytype

https://github.com/google/pytype
4•mxmlnkn•3m ago•0 comments

Is the A.I. Sell-Off the Start of Something Bigger?

https://www.nytimes.com/2025/08/20/business/dealbook/ai-dip-blip-palantir-nvidia.html
1•voxadam•4m ago•1 comment

How harmful is blue light for sleep?

https://www.nytimes.com/2025/08/17/well/health-effects-blue-light-screen-use.html
1•bookofjoe•7m ago•1 comment

US Health Secretary Ends Decades of Research into Environmental Causes of Autism

https://www.propublica.org/article/rfk-jr-autism-environment-research-funding
1•klipt•7m ago•0 comments

CSS line-height unit 1h

https://caniuse.com/mdn-css_types_length_lh
1•Brajeshwar•8m ago•0 comments

American Millennials Are Dying at an Alarming Rate

https://slate.com/technology/2025/08/millennials-gen-z-death-rates-america-high.html
1•damien•8m ago•0 comments

The Four Stages of Objective-Smalltalk

https://blog.metaobject.com/2019/12/the-4-stages-of-objective-smalltalk.html
1•thunderbong•9m ago•0 comments

L2AW Theorem

https://law-theorem.com/
1•avinassh•10m ago•0 comments

The Pragmatic Engineer 2025 Survey: What's in your tech stack? Part 2

https://newsletter.pragmaticengineer.com/p/the-pragmatic-engineer-2025-survey-part-2
1•CharlesW•12m ago•0 comments

Dagger and opencode and agnostic agents and SSH app = most portable dev kit

3•epuerta99•14m ago•0 comments

Crash Cows

https://beza1e1.tuxen.de/lore/crash_cows.html
4•indrora•15m ago•0 comments

What went wrong with Social Media?

https://arun626588.substack.com/p/what-went-wrong-with-social-media
1•rohannihalani•15m ago•0 comments

Openwetware.org shut down due to funding

https://openwetware.org/
1•eldenring•15m ago•0 comments

James Webb Space Telescope runs an extended version of JavaScript [pdf]

https://www.stsci.edu/~idash/pub/dashevsky0607rcsgso.pdf
2•homebrewer•15m ago•0 comments

Travel eSIMs route traffic over Chinese and undisclosed networks: study

https://www.itnews.com.au/news/travel-esims-secretly-route-traffic-over-chinese-and-undisclosed-networks-study-619659
3•taubek•16m ago•0 comments

Cool or Hard

https://belief.horse/notes/cool-or-hard/
1•doctorhandshake•16m ago•0 comments

For decades, sleep has been passive

https://xcancel.com/dwdavison/status/1957972610202960005#m
1•palmfacehn•18m ago•0 comments

Notes on Image Generation with GPT-4.1

https://taoofmac.com/space/notes/2025/07/20/1230
1•rcarmo•19m ago•0 comments

The reason the West is warmongering against China

https://www.aljazeera.com/opinions/2025/8/3/the-real-reason-the-west-is-warmongering-against-china
3•Qem•19m ago•0 comments

Integrating Jenkins with AEM Deployments

https://aemslate.com/integrating-jenkins-with-aem-deployments
1•a-blank-slate•19m ago•0 comments

Disk Sampling on the Sphere

https://observablehq.com/@jrus/spheredisksample
3•jacobolus•19m ago•0 comments

Just Write

https://www.moll.dev/notes/justwrite/
3•mooreds•20m ago•0 comments

A proposal for inline LLM instructions in HTML based on llms.txt

https://vercel.com/blog/a-proposal-for-inline-llm-instructions-in-html
3•brycewray•21m ago•0 comments

Hx-optimistic: Declarative optimistic updates for Htmx

https://www.lorenstew.art/blog/hx-optimistic/
1•lorenstewart•22m ago•0 comments

Show HN: Yellhorn – MCP server to help coding agents 1-shot long tasks

https://github.com/msnidal/yellhorn-mcp
1•sravanjayanthi•27m ago•1 comment

REITs Buying Tranches of Single-Family Homes (2024)

https://finance.yahoo.com/news/other-side-hedge-funds-reits-180055854.html
3•danielam•28m ago•0 comments

ComputerRL: Scaling Reinforcement Learning for Computer Use Agents

https://arxiv.org/abs/2508.14040
1•cjbarber•30m ago•0 comments

Processing 24T tokens for LLM training with 0 crashes (what made it possible)

https://www.daft.ai/blog/how-essential-ai-built-essential-web-v1-with-daft
1•DISCURSIVE•32m ago•0 comments

Digg.com Is Back

https://www.digg.com/
49•thatgerhard•33m ago•35 comments

Show HN: A new JavaScript runtime for writing high-performance web apps in Rust

https://www.npmjs.com/package/brahma-firelight
1•StellaMary•34m ago•1 comment

Show HN: Randomly switching between LMs at every step boosts SWE-bench score

https://www.swebench.com/SWE-bench/blog/2025/08/19/mini-roulette/
5•lieret•1h ago
What if your agent uses a different LM at every turn? We let mini-SWE-agent randomly switch between GPT-5 and Sonnet 4, and it scored higher on SWE-bench than with either model alone.

GPT-5 by itself gets 65.0% and Sonnet 4 gets 64.8%, but randomly switching at every step gets us 67.2%.

This result came as quite a surprise to us. There are a few more experiments in the blog post.
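For illustration, here is a minimal sketch of what per-step model roulette could look like in Python. The helpers query_model, run_in_terminal, and is_done are hypothetical stand-ins, not mini-SWE-agent's actual API:

    import random

    # Hypothetical sketch of per-step model roulette. The helpers below are
    # illustrative placeholders, not the actual mini-SWE-agent implementation.
    MODELS = ["gpt-5", "claude-sonnet-4"]

    def query_model(model: str, history: list[dict]) -> str:
        # Placeholder for an LM call; a real agent would hit the model's API.
        return f"echo 'next command proposed by {model}'"

    def run_in_terminal(command: str) -> str:
        # Placeholder for executing the agent's shell command in the harness.
        return f"output of: {command}"

    def is_done(observation: str) -> bool:
        # Placeholder stop condition, e.g. the agent submits a patch.
        return "SUBMIT" in observation

    def run_agent(task: str, max_steps: int = 50) -> list[dict]:
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            model = random.choice(MODELS)  # re-draw the model at every turn
            action = query_model(model, history)
            history.append({"role": "assistant", "content": action})
            observation = run_in_terminal(action)
            history.append({"role": "user", "content": observation})
            if is_done(observation):
                break
        return history

The draw here is uniform and independent at each step, so every turn is equally likely to be handled by either model; the blog post covers a few more variations.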

Comments

NitpickLawyer•1h ago
This is really cool! And even cooler that it's tested on their mini agent harness (which only has access to a "terminal", no other tools), because this implies it's "raw model power" rather than "software glue".

My speculation: this is an "emergent" capability that falls out of good / scalable / "solved" RL. Both Anthropic and OpenAI seem to have made huge advances in RL. (xAI as well, but they haven't yet released their coding model, so we'll see if that continues.) In contrast to other RL'd models out there (e.g. the DeepSeeks, the Qwens, etc.) that score really well on tasks similar to those in benchmarks, both Claude 4 and GPT-5 seem to have "learned" what agentic means at a different level. They can be guided through tasks, asked to do one particular subpart of a task, or to take a particular approach, etc. And they do it well. The other implementations feel "stubborn". Can't explain it better.

It will be interesting to see what Gemini 3 brings. Google / DeepMind are experts at RL, and Gemini 2.5 is a bit old now, so I'm curious to see what they can deliver on this front. My guess is that we'll see the same kind of "it gets it" after scaled RL.

One note I've made after using GPT-5 for a bit: it seems to have a case of "get-there-itis" when solving tasks. It wants to solve them so badly that it sometimes forgets the plan, or rushes through step 5 after solving 1-4 pretty thoroughly. Might be prompting as well; maybe prompts haven't yet caught up.