frontpage.

This Virus Doesn't Make You Sick. It Makes You Stronger

https://scitechdaily.com/this-virus-doesnt-make-you-sick-it-makes-you-stronger/
1•thelastgallon•5m ago•0 comments

Ghfgh

https://ctxt.io/2/AAD4SkruEA
1•grgt•8m ago•0 comments

Peter Thiel says he told Elon Musk not to give wealth to charity

https://www.reuters.com/world/us/peter-thiel-talk-antichrist-says-he-told-elon-musk-not-give-weal...
1•Cornbilly•8m ago•0 comments

Mixio AI – AI Live-Streaming

https://mixio.ai
1•hslater101•9m ago•1 comments

Experience the Magic of Falling Sand

https://sand-blast.org/
1•yuyu74189w•9m ago•1 comments

Will the explainer post go extinct?

https://dynomight.substack.com/p/explainers
1•walterbell•10m ago•0 comments

The Language of the Black Parade

https://blambot.com/pages/the-language-of-the-black-parade
1•gaws•17m ago•0 comments

Project Amethyst: AMD and Sony Interactive Entertainment's Shared Vision [video]

https://www.youtube.com/watch?v=1LCMzw-_dMw
1•croes•18m ago•0 comments

How to Enable SFTP Without Shell Access on Ubuntu

https://www.digitalocean.com/community/tutorials/how-to-enable-sftp-without-shell-access-on-ubunt...
1•thunderbong•21m ago•0 comments

RND1-Base-0910: experimental diffusion LM with 30B params (3B active)

https://huggingface.co/radicalnumerics/RND1-Base-0910
1•jasonjmcghee•22m ago•0 comments

State of AI Report

https://www.stateof.ai/
1•kyahwill•29m ago•0 comments

The Underscore Music Player

https://kottke.org/25/10/the-underscore-music-player
1•tobr•35m ago•0 comments

A trader's 5-minute fix for missing IPO momentum plays

1•kvallans•36m ago•0 comments

Ask HN: Will large scale cross-holdings in US stocks lead to a market crash?

1•roschdal•37m ago•0 comments

When Will Quantum Computing Work?

https://tommccarthy.net/when-quantum.html
1•pongogogo•38m ago•0 comments

Every website builder felt like torture – so I built my own

https://instantsite.app
1•emanuilv•40m ago•0 comments

Laion, the dataset behind Stable Diffusion (2023)

https://www.deeplearning.ai/the-batch/the-story-of-laion-the-dataset-behind-stable-diffusion/
1•thelastgallon•48m ago•0 comments

Ask HN: Laptop for learning intermediate modern sysadmin

1•shivajikobardan•49m ago•2 comments

Response times and what to make of their percentile values

https://www.ombulabs.com/blog/performance/response-times-and-what-to-make-of-their-percentile-val...
1•thelastgallon•50m ago•0 comments

Numair Faraz Is R*

1•kwoii•54m ago•0 comments

Apple Reorganization Moves Health and Fitness to Services

https://www.bloomberg.com/news/articles/2025-10-10/apple-to-move-health-fitness-divisions-to-serv...
1•ksec•1h ago•1 comments

Tom's Data Onion

https://www.tomdalling.com/toms-data-onion/
1•archargelod•1h ago•1 comments

Syneris — Instantly share videos, images, and websites with zero friction

https://syneris.netlify.app
1•brandon22•1h ago•0 comments

Flowcharts vs. Handoffs: a simple math framing

https://blog.rowboatlabs.com/flowcharts-vs-handoffs-a-simple-math-framing/
1•thunderbong•1h ago•0 comments

Battlefield 6 players hit server queues as over 500k concurrents after launch

https://www.pcgamer.com/games/fps/battlefield-6-players-hit-server-queues-as-it-rockets-to-over-5...
2•ksec•1h ago•2 comments

AutomatosX – Multi-agent framework with persistent memory for developers

https://github.com/defai-digital/automatosx
2•akira921•1h ago•1 comments

Filmmaker Mode adapts to daylight to fix dark movies

https://www.flatpanelshd.com/news.php?subaction=showfull&id=1759990647
1•ksec•1h ago•0 comments

Why it took 4 years to get a lock files specification

https://snarky.ca/why-it-took-4-years-to-get-a-lock-files-specification/
1•todsacerdoti•1h ago•0 comments

More than half of entrepreneurs are considering moving to a new country

https://www.cnbc.com/2025/10/10/entrepreneurs-moving-motivations-hsbc-survey.html
1•jnord•1h ago•1 comments

The A.I. Prompt That Could End the World

https://www.nytimes.com/2025/10/10/opinion/ai-destruction-technology-future.html
1•axiomdata316•1h ago•0 comments

Bitter lessons building AI products

https://hex.tech/blog/bitter-lessons-building-ai-in-hex-product-management/
25•vinhnx•3h ago

Comments

ninetyninenine•1h ago
The bitterest lesson is that AI is improving. It didn't actually hit a wall. The first product was too early... it failed because AI was not good enough. Back then, everyone said we had hit a wall.

Now the AI is good enough. People are still saying we hit a wall. Are you guys sure?

He learned a lesson about building a product with AI that was incapable. What happens when AI is so capable that it negates all these specialized products?

AI is not in a bubble. This technology will change the world. The bubble is people like this guy trying to build GUIs around AI to smooth out the rough parts, which are constantly getting better and better.

airstrike•1h ago
Not all of us buy into that extrapolation.

> He learned lesson about building a product with AI that was incapable. What happens when AI is so capable it negates all these specialized products?

I don't know, ask me again in 50 years.

ninetyninenine•1h ago
Nobody buys into it. That's the problem.

But you have to realize: before AI was capable of doing something like NotebookLM, nobody bought into it. And they were wrong. They failed to extrapolate.

Now that AI CAN do NotebookLM, people hold on to the same sentiment. You guys were wrong.

airstrike•59m ago
Your argument is a fallacy in three immediate ways:

1. We're not all the same person, to be clear.

2. It's also not the same argument as before. It's not the same extrapolation.

3. And being right or wrong in the past has no bearing on the current argument.

NotebookLM doesn't need new AI. It's tool use and context. Tool use is awesome; I've been saying that for ages.

It's wrong to extrapolate that we're seamlessly going to go from tool use to "AI replaces humans".
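The "tool use and context" point can be made concrete with a toy loop (a minimal sketch; `fake_model` and `search_notes` are hypothetical stand-ins, not any real API — the model either requests a tool call or answers from context the tools have filled in):

```python
# Minimal tool-use loop: the model either answers or requests a tool call,
# and tool results are appended to the context for the next turn.

def search_notes(query):
    # Hypothetical tool: look up the user's uploaded documents.
    corpus = {"roadmap": "Q3: ship tool use; Q4: ship context caching."}
    return corpus.get(query, "no match")

def fake_model(messages):
    # Stand-in for an LLM call: request a tool once, then answer from context.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "search_notes", "query": "roadmap"}}
    tool_results = [m["content"] for m in messages if m["role"] == "tool"]
    return {"answer": f"Based on your notes: {tool_results[-1]}"}

def run(question):
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:
            result = search_notes(reply["tool_call"]["query"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["answer"]

print(run("What's on the roadmap?"))
```

The point of the sketch: nothing here requires a smarter model, only plumbing that routes tool output back into the context.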

journal•1h ago
i've not been impressed since gpt3.5

nougati•1h ago
I'm surprised at this; LLMs have had many developments since GPT-3.5, both technologically and culturally. What kind of developments would be impressive to you?

oldge•1h ago
This is a common sentiment from my peers who have not spent any real time with the frontier models in the last six months.

They tend to poke the free ChatGPT with ill-defined requests and come away disappointed.

exfalso•59m ago
Same experience here, using new models. Every time it's a disappointment. Useful for search queries that are not too specialized. That's it.
sampullman•44m ago
I get pretty good results with Claude Code, Codex, and to a lesser extent Jules. It can navigate a large codebase and get me started on a feature in a part of the code I'm not familiar with, and do a pretty good job of summarizing complex modules. With very specific prompts it can write simple features well.

The nice part is I can spend an hour or so writing specs, start 3 or 4 tasks, and come back later to review the results. It's hard to be totally objective about how much time it saves me, but it generally feels worth the $200/month.

One thing I'm not impressed by is the ability to review code changes, that's been mostly a waste of time, regardless of how good the prompt is.

journal•1h ago
maybe if openai let me generate an image through the api? that would impress me. instead, they took away temperature and gave us verbosity and reasoning effort to think about every time we make an api call.

esafak•57m ago
Then you should be very impressed, because they let you generate videos by API: https://platform.openai.com/docs/models/sora-2

That's a low bar.
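The parameter shift complained about above shows up in the request shape itself. A sketch of the two payload styles (field names as I understand them from OpenAI's docs for chat completions vs. the newer reasoning models — treat the exact names as assumptions):

```python
# Older chat-completions-style request: sampling controlled by temperature.
legacy_request = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Summarize this thread."}],
    "temperature": 0.7,
}

# Newer reasoning-model-style request: no temperature knob; instead you
# choose a reasoning effort and an output verbosity on every call.
reasoning_request = {
    "model": "gpt-5",
    "input": "Summarize this thread.",
    "reasoning": {"effort": "low"},   # e.g. minimal | low | medium | high
    "text": {"verbosity": "low"},     # e.g. low | medium | high
}
```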

Legend2440•1h ago
>AI is not in a bubble. This technology will change the world.

The technology can change the world, and still be a bubble.

Just because neural networks are legit doesn’t mean it’s a smart decision to build $500 billion worth of datacenters.

rf15•1h ago
If AI becomes as good as you claim, there is no need for you. Since it can replace you in every endeavor and be better at it, ANY energy given to you is logically better invested by giving it to the AI. Stop wasting our collective resources.

gsf_emergency_4•1h ago
Rich Sutton, the guy behind both "reinforcement learning" & "the Bitter Lesson", muses that Tech needs to understand the Bitter Lesson better:

https://youtu.be/QMGy6WY2hlM

Longer analysis:

https://youtu.be/21EYKqUsPfg?t=47m28s

To (try and) summarize those in the context of TFA: builders need to distinguish between policy optimisations and program optimisations.

I guess a related question to ask (important for both startups and Big Tech) might be: "should one focus on doing things that don't scale?"