
Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•1m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•1m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•2m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•7m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•13m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•15m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•19m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•21m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•27m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•31m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•31m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•35m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•36m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•38m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•40m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•43m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•44m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•46m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•47m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•49m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•52m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•57m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•59m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

The computational cost of corporate rebranding

5•rileygersh•7mo ago
Coke Classic, er, I mean HBO Max is Back!

This got me thinking about how corporate rebranding creates unexpected costs in AI training and inference.

Consider HBO's timeline:

- 2010: HBO Go
- 2015: HBO Now
- 2020: HBO Max
- 2023: Max
- 2025: HBO Max (they're back)

LLMs trained on different time periods will have completely different "correct" answers about what Warner Bros' streaming service is called. A model trained in 2022 will confidently tell you it's "HBO Max." A model trained in 2024 will insist it's "Max."
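To make the cutoff effect concrete, here's a toy sketch in Python. The year-to-name table just restates the timeline above; it's illustrative, not a real API, and real models obviously don't expose a clean lookup like this:

```python
# Hypothetical lookup (years from the timeline above): the name a model
# with a given training cutoff would use for Warner Bros.' streaming service.
BRAND_BY_YEAR = {
    2010: "HBO Go",
    2015: "HBO Now",
    2020: "HBO Max",
    2023: "Max",
    2025: "HBO Max",
}

def name_as_of(year: int) -> str:
    """Return the most recent rebrand on or before `year`."""
    eligible = [y for y in BRAND_BY_YEAR if y <= year]
    if not eligible:
        raise ValueError(f"no brand recorded before {year}")
    return BRAND_BY_YEAR[max(eligible)]

print(name_as_of(2022))  # HBO Max
print(name_as_of(2024))  # Max
```

Two models a couple of years apart land on different "correct" answers for the same question, and both are right relative to their cutoff.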

This creates real computational overhead. Just as politeness tokens like "please" and "thank you" add millions of dollars to inference costs when multiplied across all queries, these brand inconsistencies force extra context switching and disambiguation at inference time.

But here's where it gets interesting: does Grok 4 have an inherent advantage with the Twitter-to-X transition because it's trained by X? While ChatGPT, Claude, and Gemini need additional compute to handle the naming confusion, Grok's training data includes the internal reasoning behind the rebrand.

The same logic applies to Apple's iOS 18→26 jump. Apple Intelligence will inherently understand:

- Why iOS skipped from 18 to 26 (year-based alignment)
- Which features correspond to which versions
- How to handle legacy documentation references

Meanwhile, third-party models will struggle with pattern matching (expecting iOS 19, 20, 21...) and risk generating incorrect version predictions in developer documentation.
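A quick sketch of that failure mode. The RELEASES list here is a hand-written assumption for illustration, not an Apple data feed:

```python
# Naive increment (what simple pattern matching predicts) vs. a lookup
# that knows about the 18 -> 26 year-alignment jump.
RELEASES = [16, 17, 18, 26]  # iOS 19-25 never shipped

def naive_next(version: int) -> int:
    # A model pattern-matching on past sequences assumes N + 1.
    return version + 1

def actual_next(version: int) -> int:
    later = [v for v in RELEASES if v > version]
    if not later:
        raise ValueError(f"no release after iOS {version}")
    return min(later)

print(naive_next(18), actual_next(18))  # plausible 19 vs. actual 26
```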

This suggests we're entering an era of "native AI advantage" - where the AI that knows your ecosystem best isn't necessarily the smartest general model, but the one trained by the company making the decisions.

Examples:

- Google's Gemini understanding Android versioning and API deprecations
- Microsoft's Copilot knowing Windows/Office internal roadmaps
- Apple Intelligence handling iOS/macOS feature timelines

For developers, this has practical implications:

- Documentation generation tools may reference wrong versions
- API integration helpers might suggest deprecated endpoints
- Code completion could assume incorrect feature availability
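One cheap mitigation is linting generated docs against versions that actually exist. A toy sketch, where the skipped-range set is an assumption for illustration:

```python
import re

# Flag iOS version numbers that were never released
# (iOS 19-25 were skipped in the year-alignment rebrand).
SKIPPED = set(range(19, 26))

def flag_phantom_versions(doc: str) -> list:
    """Return iOS version numbers in `doc` that don't exist."""
    found = {int(m) for m in re.findall(r"iOS (\d+)", doc)}
    return sorted(v for v in found if v in SKIPPED)

print(flag_phantom_versions("Requires iOS 19; tested on iOS 18."))  # [19]
```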

The computational cost isn't just about training - it's about ongoing inference overhead every time these models encounter ambiguous brand references.