
Bitter lessons building AI products

https://hex.tech/blog/bitter-lessons-building-ai-in-hex-product-management/
32•vinhnx•3mo ago

Comments

ninetyninenine•3mo ago
The bitterest lesson is that AI is improving. It didn't actually hit a wall. The first product was too early... it failed because AI was not good enough. Back then, everyone said we had hit a wall.

Now the AI is good enough. People are still saying we hit a wall. Are you guys sure?

He learned a lesson about building a product with AI that was incapable. What happens when AI is so capable that it negates all these specialized products?

AI is not in a bubble. This technology will change the world. The bubble is people like this guy trying to build GUIs around AI to smooth out the rough parts, which are constantly getting better and better.

airstrike•3mo ago
Not all of us buy into that extrapolation.

> He learned lesson about building a product with AI that was incapable. What happens when AI is so capable it negates all these specialized products?

I don't know, ask me again in 50 years.

ninetyninenine•3mo ago
Nobody buys into it. That's the problem.

But you have to realize: before AI was capable of doing something like NotebookLM, nobody bought into it. And they were wrong. They failed to extrapolate.

Now that AI CAN do NotebookLM, people hold on to the same sentiment. You guys were wrong.

airstrike•3mo ago
Your argument is fallacious in three immediate ways:

1. We're not all the same person, to be clear.

2. It's also not the same argument as before. It's not the same extrapolation.

3. And being right or wrong in the past has no bearing on the current argument.

NotebookLM doesn't need new AI. It's tool use and context. Tool use is awesome, I've been saying that for ages.

It's wrong to extrapolate that we're going to go seamlessly from tool use to "AI replaces humans".
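For what it's worth, "tool use" here is mechanically simple: the model emits a structured call, the host executes it, and the result goes back into the context. A toy sketch with a stubbed model (`fake_model`, `search_notes`, and the note contents are all invented for illustration; no real LLM or NotebookLM API is involved):

```python
def search_notes(query: str) -> str:
    """Stand-in for a retrieval tool over a user's documents."""
    notes = {"bitter lesson": "general methods that scale beat hand-built knowledge"}
    return notes.get(query.lower(), "no match")

TOOLS = {"search_notes": search_notes}

def fake_model(messages):
    """Stub LLM: request a tool once, then answer from the tool's result."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool": "search_notes", "args": {"query": "bitter lesson"}}
    return {"answer": f"Your notes say: {tool_results[-1]['content']}"}

def run(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        step = fake_model(messages)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # host executes the call
        messages.append({"role": "tool", "content": result})

print(run("What do my notes say about the bitter lesson?"))
# prints: Your notes say: general methods that scale beat hand-built knowledge
```

The intelligence lives entirely in the (stubbed) model; everything around it is plumbing, which is the point being made about NotebookLM.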

ninetyninenine•3mo ago
>1. We're not all the same person, to be clear.

No, but you all run under the same label. This is common, if you didn't know: a certain group of people with certain beliefs can be called Republican, Democrat, or Catholic. I didn't name the label explicitly, but you are all in that group. I thought it was obvious I wasn't talking about one person. I don't think you're so stupid as to actually think that, so don't pretend you misinterpreted what I said.

>2. It's also not the same argument as before. It's not the same extrapolation.

Seems like the same argument to me: you thought LLMs were stochastic parrots, inherently and forever limited by their very nature (a statement made with no proof).

The extrapolation has been the same since the dawn of AI: upwards. We may hit a wall, but nobody can know this for sure.

>3. And being right or wrong in the past has no bearing on current

It does. Past performance is a good predictor of current performance. It's also common sense; why else do we have resumes?

You were wrong before, chances are... you'll be wrong again.

>It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"

You just make this statement without any supporting evidence? It's wrong just because you say so?

This is my statement: the trendline points to an eventual future that remains an open possibility...

versus your conclusion which is "it's wrong"

journal•3mo ago
I've not been impressed since GPT-3.5.
nougati•3mo ago
I'm surprised at this; LLMs have had many developments since GPT-3.5, both technologically and culturally. What kind of development would be impressive to you?
oldge•3mo ago
This is a common sentiment from my peers who have not spent any real time with the frontier models in the last six months.

They tend to poke the free ChatGPT with ill-defined requests and come away disappointed.

exfalso•3mo ago
Same experience here, using new models. Every time it's a disappointment. Useful for search queries that are not too specialized. That's it.
sampullman•3mo ago
I get pretty good results with Claude Code, Codex, and to a lesser extent Jules. They can navigate a large codebase, get me started on a feature in a part of the code I'm not familiar with, and do a pretty good job of summarizing complex modules. With very specific prompts they can write simple features well.

The nice part is I can spend an hour or so writing specs, start 3 or 4 tasks, and come back later to review the results. It's hard to be totally objective about how much time it saves me, but it generally feels worth the $200/month.

One thing I'm not impressed by is the ability to review code changes; that's been mostly a waste of time, regardless of how good the prompt is.

ninetyninenine•3mo ago
Company expectations are higher too. Many companies expect 10x output now due to AI, but the technology has been growing so quickly that a lot of people and companies haven't realized we're in the middle of a paradigm shift.

If you're not using AI for 60-70 percent of your code, you are behind. And yes, $200 per month for AI is required.

fragmede•3mo ago
We've been trialing CodeRabbit at work for code review. I have various nits to pick, but it feels like a good addition.
journal•3mo ago
Maybe if OpenAI let me generate an image through the API? That would impress me. Instead, they took away temperature and gave us verbosity and reasoning effort to think about every time we make an API call.
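For readers who haven't hit this yet, here is a sketch of the shift being complained about. The field names (`reasoning.effort`, `text.verbosity`) are my reading of OpenAI's newer Responses-API surface for reasoning models; treat them as assumptions and check the current reference. No API call is made here, this only shows the request shapes:

```python
# Older chat-completions style: sampling was tuned with `temperature`.
legacy = {"model": "gpt-4", "temperature": 0.7, "messages": [...]}  # hypothetical request body

# Newer reasoning-model style: no `temperature`; instead you choose how
# much the model thinks and how long its answer runs (assumed field names).
payload = {
    "model": "gpt-5",
    "input": "Summarize this changelog in two sentences.",
    "reasoning": {"effort": "low"},  # thinking budget
    "text": {"verbosity": "low"},    # answer length
}
```

The practical complaint stands either way: two knobs to reason about on every call instead of one.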
esafak•3mo ago
Then you should be very impressed, because they let you generate videos by API: https://platform.openai.com/docs/models/sora-2

That's a low bar.

Legend2440•3mo ago
>AI is not in a bubble. This technology will change the world.

The technology can change the world, and still be a bubble.

Just because neural networks are legit doesn’t mean it’s a smart decision to build $500 billion worth of datacenters.

kingstnap•3mo ago
You're right, we should have built $5 trillion worth. /s
aloha2436•3mo ago
The internet was a bubble! Not long after, it took over planet Earth. But it was also a bubble.
rf15•3mo ago
If AI becomes as good as you claim, there is no need for you. Since it can replace you in every endeavor and be better at it, ANY energy given to you is logically better invested by giving it to the AI. Stop wasting our collective resources.
ninetyninenine•3mo ago
It can. That's the future, bro. It will replace me, you, and all of us.

You're dropping that line as if it's absurd. Be realistic. Dark conclusions are not automatically illogical. If the logic points to me being replaced, then that's just reality.

Right now we don't know if I (aka you) will be replaced, but the trendlines point to it as a possibility.

gsf_emergency_4•3mo ago
Rich Sutton, the guy behind both "reinforcement learning" & "the Bitter Lesson", muses that Tech needs to understand the Bitter Lesson better:

https://youtu.be/QMGy6WY2hlM

Longer analysis:

https://youtu.be/21EYKqUsPfg?t=47m28s

To (try and) summarize those in the context of TFA: builders need to distinguish between policy optimisations and program optimisations.

I guess a related question to ask (important for both startups and Big Tech) might be: "should one focus on doing things that don't scale?"