frontpage.

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•4m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•5m ago•1 comment

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•8m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
4•chwtutha•9m ago•0 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
2•osnium123•9m ago•1 comment

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
1•jeremy_su•11m ago•0 comments

Show HN: I built a Mac screen recorder, Recap.Studio

https://recap.studio/
1•fx31xo•13m ago•0 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•19m ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•21m ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•32m ago•1 comment

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
2•thread_id•33m ago•1 comment

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•34m ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•37m ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
1•paladin314159•38m ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•39m ago•0 comments

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•39m ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
1•ark296•40m ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
2•medbar•41m ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•42m ago•1 comment

Game of Trees (Got)

https://www.gameoftrees.org/
1•akagusu•42m ago•1 comment

Human Systems Research Submolt

https://www.moltbook.com/m/humansystems
1•cl42•42m ago•0 comments

The Threads Algorithm Loves Rage Bait

https://blog.popey.com/2026/02/the-threads-algorithm-loves-rage-bait/
1•MBCook•45m ago•0 comments

Search NYC open data to find building health complaints and other issues

https://www.nycbuildingcheck.com/
1•aej11•49m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•lxm•50m ago•0 comments

Show HN: Grovia – Long-Range Greenhouse Monitoring System

https://github.com/benb0jangles/Remote-greenhouse-monitor
1•benbojangles•54m ago•1 comment

Ask HN: The Coming Class War

2•fud101•54m ago•4 comments

Mind the GAAP Again

https://blog.dshr.org/2026/02/mind-gaap-again.html
1•gmays•56m ago•0 comments

The Yardbirds, Dazed and Confused (1968)

https://archive.org/details/the-yardbirds_dazed-and-confused_9-march-1968
2•petethomas•57m ago•0 comments

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
2•kiddz•57m ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
3•a_n•1h ago•1 comment

Ask HN: What if the AI scaling plateau is just a "false dip"?

1•massicerro•4w ago
First of all, I’m Italian, and since I don’t feel confident enough to write this post in English myself, I used Gemini to translate my thoughts into the text below.

The Premise: There has been a lot of talk lately about the possibility that AI development (as we currently know it) is approaching a plateau. While I don't personally agree with this hypothesis, it is undeniably a common sentiment in the industry right now, so it’s worth investigating.

We have seen that increasing the number of parameters or "scaling up" a neural network doesn't always yield immediate linear improvements. With certain versions of ChatGPT, many users perceived a degradation in performance despite the underlying network complexity presumably being increased.

My Theory: Is it possible that we are seeing a "complexity dip"? In other words, could there be a phase where increasing complexity initially causes a drop in performance, only to be followed by a new phase where that same complexity allows for superior emergent properties?

To simplify, let’s imagine a hypothetical scale where we compare "Complexity" (parameters/compute) vs. "Performance." For example (sketched in code after the list):

LLM: ChatGPT 3 // Complexity Level 1 // Performance 0.2

LLM: ChatGPT 3.5 // Complexity Level 10 // Performance 0.5

LLM: ChatGPT 4 // Complexity Level 100 // Performance 0.75

LLM: ChatGPT 4.2 // Complexity Level 1000 // Performance 0.6 (the "False Plateau" / performance degradation)

LLM: ChatGPT 4.2X // Complexity Level 10000 // Performance 0.5 (further degradation due to unmanaged complexity)

LLM: ChatGPT 6 // Complexity Level 100000 // Performance 0.8 (the "breakthrough": new abilities emerge)

LLM: ChatGPT 7 // Complexity Level 1000000 // Performance 0.99 (potential AGI / peak performance)
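
To make the dip easy to see, here is a minimal Python sketch that just prints the invented numbers above as a text chart. Every model name, complexity level, and score in it is an illustrative guess from this post, not a real benchmark result:

    # Toy sketch of the hypothetical "false dip" curve.
    # All values are this post's invented examples, not measurements.
    import math

    # (model label, complexity level, performance) -- hypothetical
    curve = [
        ("ChatGPT 3",    1,       0.20),
        ("ChatGPT 3.5",  10,      0.50),
        ("ChatGPT 4",    100,     0.75),
        ("ChatGPT 4.2",  1000,    0.60),  # the "false plateau" begins
        ("ChatGPT 4.2X", 10000,   0.50),  # further degradation
        ("ChatGPT 6",    100000,  0.80),  # the hypothetical breakthrough
        ("ChatGPT 7",    1000000, 0.99),  # hypothetical peak
    ]

    # Log-scale the complexity axis and draw a crude bar chart, so the
    # non-monotonic dip between ChatGPT 4 and ChatGPT 6 stands out.
    for label, complexity, perf in curve:
        bar = "#" * round(perf * 40)
        print(f"{label:>12}  10^{int(math.log10(complexity)):<2d}  {perf:.2f}  {bar}")

If the curve really had this shape, any decision made while sitting in the 10^3–10^4 range would see only negative returns, which is exactly the economic risk described in the next paragraph.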

The Risk: The real problem here is economic and psychological. If we are currently in the "GPT-4.x" phase of this example, the industry might stop investing because the returns look negative. We might never reach the "GPT-6" level simply because we mistook a temporary dip for a permanent ceiling.

I’m curious to hear your thoughts. Have we seen similar "dips" in other complex systems before a new level of organization emerges? Or is the plateau a hard physical limit?

Comments

chrisjj•4w ago
> With certain versions of ChatGPT, many users perceived a degradation in performance despite the underlying network complexity presumably being increased.

Perhaps the cause is simply the presumption?

massicerro•4w ago
Of course, the 'presumption' of increased complexity or the 'subjective perception' of a drop in performance might be the cause. But that misses the real point: regardless of user perception, is it possible that a 'false plateau' exists that keeps us from a major leap in performance? The risk is that the simple 'perception of having taken the wrong path' by researchers or companies would lead them to ignore that possibility...
funkyfiddler69•3w ago
> the simple 'perception of having taken the wrong path' by researchers or companies

IMO, neither the plateau nor the perception of "a wrong path" is real. There are too many paths and too few humans with adequately capable brains.

Companies talk to set the agenda and for the kick of surprise. It's a marketing thing.

AI R&D is basically thinking out loud nowadays. It's just the pace of the news.

I believe that most AI development has reached the flat end of a logarithmic curve. The humans assigned to it will catch up, and then we'll see faster growth again. It takes time to get from one edge to the other, to walk along it, or to explore the area.

The progress is there, but it's vanishingly small compared to past years, when it was relatively simple to get better results over and over, and nobody will notice it unless they are sensitized to it.

What kind of major leap in performance do you expect? What do others expect? Be specific and people will tell you whether there is a plateau or not enough hands on deck working on specific problems.