frontpage.

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
2•witnessme•3m ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
1•aloukissas•7m ago•0 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•9m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•15m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
3•alephnerd•18m ago•1 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•18m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•21m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•22m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
2•ArtemZ•33m ago•4 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•34m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•36m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
3•duxup•38m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•40m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•52m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•54m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•55m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•56m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•1h ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
2•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
41•chwtutha•1h ago•7 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Is GPT-5 a regression, or is it just me?

6•technocratius•4mo ago
Context: I have been using GPT-5 since its release over a month ago, within my Plus subscription. Before this release, I relied heavily on o3 for most complex tasks, with 4o for simple questions. I use it for a mix of scientific literature web search (e.g. understanding health-related topics), occasional coding assistance, and help with *nix sysadmin tasks. Note that I have not used the API or any IDE integration.

Based on a month of GPT-5 usage, this model feels primarily like a regression:

1. It's slow: thinking mode can take ages and sometimes gets downright stuck. Its auto-assessment of whether it needs to think feels poorly tuned for most tasks and defaults too easily to deep reasoning mode.

2. Hallucinations are in overdrive: in roughly 7 out of 10 tasks, hallucinations clutter the responses and require corrections, careful monitoring, and steering back on track. It hallucinates list items that weren't in your prompt, software package functionalities/capabilities, CLI parameters, etc. Even thorough prompting with explicit links to sources, e.g. within deep research, frequently goes off the rails.

3. Not self-critical: even in thinking mode, it frequently spews out incorrect answers that a blunt "this is not correct, check your answer" can immediately fix.

Note: I am not a super advanced prompt engineer, and the assessment above is mainly relative to the previous generation of models. I would expect that as model capabilities progress, the need for careful prompt engineering goes down, not up.

I am very curious to hear your experiences.

Comments

patrakov•4mo ago
You are not alone.

Another pet peeve: when asked to provide several possible solutions, it sometimes generates two that are identical but with different explanations.

technocratius•4mo ago
Ah yes, I've had similar experiences actually. Also the variant where I ask it to provide an alternative solution/answer to the one it gave, and it then proceeds to basically regurgitate its previous answer with slight stylistic modifications (i.e. maintaining content parity).
lcnPylGDnU4H9OF•4mo ago
GPT-5 was an upgrade for investors. Its primary feature is a router that decides between a stronger model and a weaker one for a given query. The goal is to reduce operating costs with no regard for improving the user experience, while marketing it as "new and improved".
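
To illustrate the idea (purely a sketch: the model names, the threshold, and the difficulty heuristic below are made up, and this is not OpenAI's actual routing logic), such a router boils down to something like:

    # Toy sketch of per-query routing between a weak and a strong model.
    # Everything here (names, heuristic, threshold) is hypothetical.

    def estimate_difficulty(query: str) -> float:
        """Crude heuristic: longer or code/math-heavy queries score higher."""
        q = query.lower()
        score = min(len(q) / 2000, 1.0)
        if any(tok in q for tok in ("```", "def ", "traceback", "prove", "derive")):
            score += 0.5
        return min(score, 1.0)

    def route(query: str, threshold: float = 0.4) -> str:
        """Pick a model for this query: cheap by default, strong only if 'hard'."""
        if estimate_difficulty(query) >= threshold:
            return "strong-reasoning-model"   # slow, expensive
        return "fast-cheap-model"             # quick, weaker

    # route("What's the capital of France?")                        -> "fast-cheap-model"
    # route("Derive the softmax cross-entropy gradient, show code") -> "strong-reasoning-model"

The user-visible problems described in this thread (slowness, inconsistent quality) then hinge almost entirely on how well that difficulty estimate is tuned.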
android521•4mo ago
Hallucinations are definitely up at least 5x compared with GPT-4, in my personal experience.
NoahZuniga•4mo ago
It's just you
theturtlemoves•4mo ago
"Ahhh, you're right, now it's clear, that explains it....."

Yeah, you're not alone. I've even been getting responses with contradictions within the same sentence. ("X won't work therefore you should use X")

incomingpain•4mo ago
I did see a significant difference in GPT-5. If someone was using mostly or only GPT-4 before, the change might come as a culture shock.

I actively use Claude, Gemini, Perplexity, and a whole gamut of local LLMs.

The personalities of the models are different, so when GPT-5 came along it wasn't really a surprise to me.