frontpage.

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•17s ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•1m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•1m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
1•captainnemo729•1m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•1m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•3m ago•0 comments

Japanese rice is the most expensive in the world

https://www.cnn.com/2026/02/07/travel/this-is-the-worlds-most-expensive-rice-but-what-does-it-tas...
1•mooreds•4m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•4m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•4m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•4m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•4m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•5m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•5m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•6m ago•0 comments

Claude Opus 4.6 extends the LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•6m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•9m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•9m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•10m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•11m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•12m ago•1 comments

Spec-Driven Design with Kiro: Lessons from Seddle

https://medium.com/@dustin_44710/spec-driven-design-with-kiro-lessons-from-seddle-9320ef18a61f
1•nslog•12m ago•0 comments

Agents need good developer experience too

https://modal.com/blog/agents-devex
1•birdculture•13m ago•0 comments

The Dark Factory

https://twitter.com/i/status/2020161285376082326
1•Ozzie_osman•13m ago•0 comments

Free data transfer out to internet when moving out of AWS (2024)

https://aws.amazon.com/blogs/aws/free-data-transfer-out-to-internet-when-moving-out-of-aws/
1•tosh•14m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•alwillis•16m ago•0 comments

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•17m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•20m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•21m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•22m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•25m ago•0 comments

What Does a Post-Google Internet Look Like?

https://matduggan.com/what-does-a-post-google-internet-look-like/
77•fside•7mo ago

Comments

palata•7mo ago
> Paid services like Kogi will be

Typo: it's called "Kagi"

xg15•7mo ago
Tell me again how LLMs were going to make everything better?
raxxorraxor•7mo ago
I use them for coding and other AI for image/video generation. It is awesome.

But I have yet to see a convincing consumer product leveraging LLM AI. In our company it prefilters support mails, tries to connect customer requests to customer information in our databases, and maybe attaches device information if a mail contains serial numbers. Some convenient stuff here and there to provide more context, but it certainly doesn't have any critical roles.

And developing LLM services takes a huge amount of time, comparable to other software projects.
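
A minimal sketch of the kind of support-mail prefiltering described in the comment above, assuming a made-up serial-number format and a hypothetical device registry (a real system would query the company database instead):

    import re

    # Hypothetical device registry; in practice this would be a database lookup.
    DEVICE_DB = {
        "SN-10234": {"customer": "ACME GmbH", "model": "X200", "warranty": "active"},
        "SN-99871": {"customer": "Globex", "model": "X450", "warranty": "expired"},
    }

    # Illustrative serial-number format: "SN-" followed by at least four digits.
    SERIAL_RE = re.compile(r"\bSN-\d{4,}\b")

    def prefilter_support_mail(mail_text: str) -> dict:
        """Attach whatever context can be found before a human (or LLM) sees the mail."""
        serials = SERIAL_RE.findall(mail_text)
        known = {s: DEVICE_DB[s] for s in serials if s in DEVICE_DB}
        return {
            "serials_found": serials,
            "known_devices": known,
            "needs_human": not known,  # nothing matched: route straight to an agent
        }

    print(prefilter_support_mail("My X200 (serial SN-10234) stopped charging."))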

dzonga•7mo ago
for now - we can identify AI slop.

in the future - yeah, interaction will be at a premium, and that includes all textual content, since you can't trust whether it was generated by AI or not. Which means pre-AI books / content are going to be more valuable.

Later on things will need to be human curated.

Havoc•7mo ago
If you genuinely can’t tell the difference anymore is there any motivation to pay a premium?
bigfishrunning•7mo ago
Only to keep human authors employed -- once an art is lost, it's lost, and the LLM cartel can only continue to operate at a loss for so long before the piper needs to be paid.
hiccuphippo•7mo ago
Human authors will not write but direct the AIs on what to write. Then they'll syndicate that to AI companies so they can keep their AI fresh. Like an ouroboros eating its own tail.
notnullorvoid•7mo ago
It depends: can you not tell the difference because you don't know better, or because the quality and accuracy are actually good?
hiccuphippo•7mo ago
https://xkcd.com/810/
Jubijub•7mo ago
It’s a dark take, but I mostly agree with it. The ad model, with all its flaws, is a flexible model, and it has funded the web for a while.

The part that will break is that a lot of sites will have 0 incentives to continue to publish, in the face of 0 revenue and 0 credit. That will degrade the quality/relevance of LLMs. I also think that "guaranteed produced by humans" will have value.

shmeeed•7mo ago
This is a chilling outlook, but it looks very plausible.

The process has already started; the building blocks are in place. See the recent rise in public complaints about intense scraper activity. Zero-click has become all but inescapable and is going to capture an ever-growing share of searches.

Still, any paradigm shift also gives room for hope. There will be pitfalls for Moloch; they might just trip over their own ambitions. And maybe there will be opportunity for organic growth to take root in the cracks of their foundations.

palata•7mo ago
In one sentence, this is what I have been saying about LLMs since they got impressive:

"I don't know what good it can make, but this thing clearly has the potential to break the Internet".

GuB-42•7mo ago
> FAANG isn't going to be hiring at anywhere near the same rate they were before, because they won't need to.

They already didn't need to. The reason they hire as much as they do is to keep their lead, essentially denying potential competitors growth by taking the best engineers and trying every crazy idea before anyone else. Or so I think.

And even though they overhire, it doesn't mean they can do huge layoffs without consequences, as the structure is now built around a massive workforce.

blacksmith_tb•7mo ago
> "The internet is going to return to more of its original roots, which are niche fan websites you largely find through social media or word of mouth"

Ok, that sounds possible, and not so bad...

> "Very few of them are going to survive... it costs a lot of money to pay people to write high quality content"

That may also be true, but doesn't really jibe with the image of the early 'net?

weregiraffe•7mo ago
Early net was tiny.
hdjdbdirbrbtv•7mo ago
But also unfathomably large, because the speed at which you could consume it was measured in kbps...
krageon•7mo ago
It may have been tiny, but it was more useful than what we have today
theodorewiles•7mo ago
I think the end state is LLM-facilitated micropayments: one vast clearinghouse / marketplace of human-generated, up-to-date content. Contributors get paid based on whether LLMs called their content via some kind of RAG. Maybe there are multiple aggregators / publishers.
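
A minimal sketch of how retrieval-based payouts in such a clearinghouse could be tallied, assuming a hypothetical retrieval log and a fixed payout pool (contributor names and amounts are made up for illustration):

    from collections import Counter

    # Hypothetical retrieval log: which contributor's content the RAG step pulled
    # for each answered query. In a real clearinghouse this would come from the
    # retriever's telemetry, not a hard-coded list.
    retrieval_log = [
        {"query_id": 1, "contributor": "alice"},
        {"query_id": 1, "contributor": "bob"},
        {"query_id": 2, "contributor": "alice"},
    ]

    def settle_payouts(log, pool_cents):
        """Split a fixed payout pool in proportion to how often each contributor was retrieved."""
        counts = Counter(event["contributor"] for event in log)
        total = sum(counts.values())
        return {who: pool_cents * n // total for who, n in counts.items()}

    print(settle_payouts(retrieval_log, pool_cents=300))  # {'alice': 200, 'bob': 100}
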
homeonthemtn•7mo ago
Yes, and there will still be a "free" version of content somewhere, because not everyone will pay those prices (because they can't afford them or don't know the value of a premium service).

So since there will be a demand, someone will produce "free resources" which will be the equivalent of a sewage outlet running into the sea.

Absolute garbage of an AI ouroboros just churning in on itself to aggregate and produce something for whatever this free channel of content ends up being.